20171227

A World Government Proposal

What is the biggest problem of our world, or of human civilization as a whole?

Is it the lack of full healthcare or good education for everyone?
Or not enough food and/or water for everyone?
Not enough (good) jobs/money for everyone?
Not good enough technology and/or living standards for everyone?
How about the lack of permanent world peace, or the lack of a fair and effective world government that could quickly solve all problems between the countries of our world?

If we look at the history of our world, we see many big and small nations, city-states, kingdoms, empires, and countries making war from time to time.
The problem is how much damage each war does to so many people and their property!
Also consider that, thanks to ever-advancing military weapons (driven by the constant power competition between all countries of the world), the possible damage from any big or small war keeps increasing, fast!

Besides wars, the big and small countries of our world also keep running into disagreements that bring bad consequences for one side or both.

Also, who can say the outcome of any particular war/disagreement was really fair for everyone?

Okay, but how do we know creating a fair and effective world government is even possible?
Currently we have the UN (the second attempt, I believe, after the League of Nations), but it obviously cannot solve all international problems. Can we really create a new UN (more like a true world government) that is fair and effective for any and all international problems?

In the current UN voting/decision system, each member country has the right to a single vote on each decision, but any permanent member of the Security Council has the right to veto. Can we really create a fairer and more effective voting/decision system?

What if there were no Security Council to block decisions? Then, obviously, the biggest and/or most powerful countries would not accept having equal voting rights with the smallest and/or least powerful countries, and rightfully so IMHO. If so, what if we found a way to fairly (re-)calculate, each year, how many votes each country should have?

Obviously, first, we would need to find a fair and equal way to evaluate each and every member country. Imagine that at the end of the evaluations, each country gets an overall score, to be used as the weight to calculate its number of votes.

And to calculate overall scores, what if we put together a big international group of scientists/experts and asked them to determine a standard set of statistics (and their weights) to evaluate the economic/military/industrial/technological/scientific power, land size, population, living standards, healthcare, and education of each and every member country; calculate a sub-score for each; and later apply a standard set of weights to all sub-scores and add them together, to calculate an overall score for each member country?
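
Here is a minimal Python sketch of that calculation (all country names, sub-scores, and weights below are made-up placeholders, purely for illustration):

# A minimal sketch of weighted vote calculation.
# All numbers below are made-up placeholders, not real statistics.

# Per-country sub-scores (already normalized to 0..1 by the expert group).
sub_scores = {
    "CountryA": {"economy": 0.9, "population": 0.8, "education": 0.7},
    "CountryB": {"economy": 0.4, "population": 0.9, "education": 0.5},
    "CountryC": {"economy": 0.2, "population": 0.1, "education": 0.9},
}

# Standard weights for each sub-score, set by the expert group.
weights = {"economy": 0.5, "population": 0.3, "education": 0.2}

TOTAL_VOTES = 1000  # total votes to distribute among all members

# Overall score = weighted sum of sub-scores.
overall = {c: sum(weights[k] * v for k, v in s.items())
           for c, s in sub_scores.items()}

# Each country's vote count is proportional to its overall score.
total = sum(overall.values())
votes = {c: round(TOTAL_VOTES * s / total) for c, s in overall.items()}
print(votes)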

20171224

EMPEROR IS NAKED

What really is the current state of String/M Theory/Theories?
And what does it really mean, practically, for the community of physicists/universities?

IMHO:

How about we always judge the current theories of physicists based on how much chance (as a probability) they currently and objectively (still) have of being correct?
And what exactly does each range of probability of correctness mean for us practically?

I think the second question is obviously easier to answer:

1) A new and promising theory of physics comes along; then what to do?
New physicists (or the percentage of them who really like challenges and taking risks) start studying it, theoretically, experimentally, observationally, by all means.
(Or existing physicists who are considering changing their area of expertise, and have a similar character :-)

2) Current theoretical/experimental/observational results indicate the theory is (very) unlikely to be correct; then what to do?
Existing experts should continue working on it, by all means.
New physicists (or ...) should choose to study that theory with a distribution matching its current (and objective) probability of correctness.

3) Current theoretical/experimental/observational results indicate the theory is (very) likely to be correct; then what to do?
Again, existing experts should continue working on it, by all means.
Again, new physicists (or ...) should choose to study that theory with a distribution matching its current (and objective) probability of correctness.
(Obviously, in the case of the current states of Quantum Mechanics and Relativity, each definitely has a .999... probability of correctness, each in its own domain/scale.)

At least ideally, whenever the current probability of correctness of a theory goes lower (because of new theoretical/experimental/observational results), the percentage of new people choosing to study it should also go lower, and vice versa.
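
A toy Python sketch of that allocation rule (the theories and probabilities below are invented placeholders, not real assessments of any theory):

# Allocate a cohort of new researchers in proportion to each theory's
# current (objective) probability of correctness.
theories = {"TheoryA": 0.6, "TheoryB": 0.3, "TheoryC": 0.1}

new_physicists = 200  # hypothetical size of this year's cohort

total = sum(theories.values())
allocation = {t: round(new_physicists * p / total)
              for t, p in theories.items()}
print(allocation)  # {'TheoryA': 120, 'TheoryB': 60, 'TheoryC': 20}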

In the case of the current state of String/M Theory/Theories/Frameworks, the following is my current personal view/judgement, as just a big fan of physics and nothing more:
I think theoretically or observationally there is really nothing to favor or disfavor String Theory.
And I think the experimental results (mainly from the LHC) directly disfavor Supersymmetry, and indirectly String Theory.
So I think what this practically means for the world of physics is case two above, unless new experimental/observational results change its probability of correctness in the future.

For the reader who read everything above:
My first idea for the title was a simple yet glorious one, "Theories Of Physics".
Later I changed it to "On Theories Of Physics".
Later I changed it again to "Descent of (String) Theory" :-)

20171223

ISRAEL AND PALESTINE

The problem between Israel and Palestine has been going on for a long time, and it does not look like it will be resolved anytime soon, IMHO.
I would ask this: do we all want a peaceful solution or not?

We all, as humanity, should always try to find peaceful solutions to any problems between ourselves, or not?
If we all want peace, we should always try to make sure absolutely all possible solutions are considered, or not?

Can we really say all possible solutions have been considered for the problem between Israel and Palestine?
And that all of them were rejected by Israel and/or Palestine?

I really do not know the answer, and I really do not think anybody else knows the answer, either (but that is just my guess).
I think it is because the problem seems really big and complicated, with a long history.

I think the scientific/logical approach would be to first try to define the problem precisely, and to make a list of all possible solutions to consider later.
Obviously, to precisely define the problem, we should first try to simplify it as much as possible.
Can we really simplify the problem between Israel and Palestine so that almost anybody can understand exactly what the problem is?
How about trying to find a simple analogy for it first?

Here is an idea (IMHO):

Imagine you are the current owner of a little farm, living there with your family.
Your farm has been owned by your family going back many generations.
Your farm also contains a little piece of land which is an extremely holy site for you, for your religion.
But you also have a rich neighbor who really wants to own your land, because the distant ancestors of the neighbor also owned the same land, for many generations.
And also, the holy site on your little farm was in fact first built and used by the distant ancestors of your neighbor, and it is also an extremely holy site for your neighbor, for his religion.
You and your neighbor have kept arguing (even fighting) for many years without finding any solution.

Obviously, the simplest solution would be both sides leaving each other alone, but imagine that never happened, and does not look like it ever will.

What other solution possibilities could be considered for the problem?

Here is an idea (IMHO):

What if your rich neighbor offers to buy your farm, for a more-than-fair, really good price for you?
But how can you sell such a holy place for your family to anyone else?

But what if, first, you and your neighbor determined the exact location and size of the common holy land, for both of you?
What if that holy land would always belong to you and your neighbor, with equal ownership, as part of the agreement?
What if you both also determined another co-owned piece of land, next to the holy land/site, to build a common headquarters, to provide security, repairs, cleaning, and so on, for the holy land/site?
What if you both provided an equal number of people for all services, and they always had to work together, entering and leaving the holy land/site together?
What if you both first determined the exact rules for any visitors, what kinds of possible situations must be handled, and exactly how?
(For example, what if multiple groups of visitors from both sides/religions want to use the same part of the holy site at the same time? Maybe a common scheduling system could be created?)

Assuming you and your neighbor agreed upon the holy site (somehow), would you be okay with selling the rest of your farm to your rich neighbor?
Or would you want to keep arguing/fighting with your neighbor? (With you and your neighbor both continuing to get/cause harm/damage.)

I think, if I were in this situation, I would be okay with selling the (non-holy) part of my farm, but only if it were absolutely certain I would get a new (and better) farm.

So, what if Palestine sold all its land to Israel (except the holy site), for an agreed price?
What if, as part of the agreement, we found another country in the Middle East willing to sell a large piece of land to the people of Palestine, to build a new (modern and luxurious) city, and start their own new independent country there?
What if, as part of the agreement, the UN officially recognized the new Palestine as a new and independent country?
What if, as part of the agreement, Israel (and the UN/US?) guaranteed protection of the new Palestinian state against any possible future invasion/takeover attempts by anyone from outside?

If we look at the history of our world, I think there were countless times when a whole nation (country) got relocated.
Also, some large lands were bought/sold by agreement (sometimes fair, sometimes not), instead of by war/invasion.
Can we do it in our modern times, without war/invasion, and with a truly agreed-upon and fair deal for both sides?

IMHO, one thing is certain: humanity would gain/accomplish a lot if we learned to solve (at least) our big international problems, always in peaceful ways.

20171222

SONIC BOOM ABATEMENT

How can we reduce the sonic booms generated by supersonic passenger aircraft, so that they can easily fly over cities, all the time?

From what I read on Wikipedia, it seems to me there are two main strategies:
try to reduce the boom by modifying aircraft shapes and/or surfaces,
or try to divide it into multiple smaller sonic booms.
I am guessing even combining both methods is still not good enough.
If so, can we find more strategies to combine with the others?

Since the problem is called a '"sonic" boom', it is a sound problem, obviously.
How do we take care of other loud noise problems?

For example, think about how noise-cancelling earphones work.
What if each aircraft carried a powerful enough speaker that generates anti-sound for its own sonic boom (and all other noise?)?

Each sonic boom can be thought of as a combination of countless simple sound waves undergoing constructive interference.
If so, can we modify supersonic aircraft designs so that each sonic boom wave is generated as twins in opposite phases, like sound and anti-sound?
Or can we modify supersonic aircraft designs so that each simple sound wave generated also has a twin in the opposite phase?
Or can we modify supersonic aircraft designs so that all the simple sound waves generated are in randomized phases, so that the aircraft generates non-loud white noise instead of a loud sonic boom?
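
A toy Python demonstration of the anti-sound idea (heavily idealized: real cancellation over an open 3D pressure field is far harder than adding two sampled waveforms):

# A wave added to its phase-inverted twin cancels out (destructive interference).
import numpy as np

t = np.linspace(0, 1, 1000)                          # 1 second of samples
boom = np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t)   # a decaying "boom" wave
anti = -boom                                          # same wave, opposite phase

residual = boom + anti
print(np.max(np.abs(residual)))  # 0.0 -- perfect cancellation in this ideal case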

20171220

General Theory Of Auto-driving

Today there are many companies working on autonomous cars/vehicles/drones/aircraft/ships/submarines. I have no idea how their software actually works. I also do not know if there is a general theory for how auto-driving/piloting software should work.

Why would we need a general theory for it, right at the beginning?

I think a good example is the history of computer programming languages. What was their state before and after the development of the general theory of programming language design?

I think the general theory of auto-driving/piloting could be based on Game Theory in Computer Science.

For example, when a computer is playing Chess, to decide each next move, it generates as many future moves as possible for itself and the other player. It evaluates the board states in those future moves with a winning/losing score. Then it chooses the move that maximizes its winning score and minimizes its losing score.

Now imagine we have a self-driving car:

It keeps track of all moving vehicles, people, animals, objects around. (Each could be represented as a moving box in a 3d non-moving world of boxes/surfaces.)

Every millisecond (or less), the software creates possible future moves for the car itself and all other moving objects. (So the car itself and each moving object are like players of the same game.)
And it also evaluates those future possibilities for questions like the following (see the sketch after the next paragraph):
How close is it to the future navigation goals of the car?
How high is the chance of an unavoidable collision?
How high is the chance of an avoidable collision?
Even if a collision is certain, how can the damage to the car itself be minimized?
Even if a collision is certain, how can the damage to another vehicle be minimized?
Even if a collision is certain, how can the damage to another person/animal be minimized?

To generate future possibilities, it would need to consider things like the car itself, each of the other vehicles around, and each of the people/animals around slowing/accelerating/turning in many different (and physically possible) ways/directions.
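
Here is a minimal Python sketch of that idea (grossly simplified: 1D positions, one obstacle, and invented acceleration choices, thresholds, and scoring weights):

import itertools

DT = 0.1  # seconds per simulated step

def step(pos, vel, accel):
    # Advance one object by one timestep with a chosen acceleration.
    vel = vel + accel * DT
    return pos + vel * DT, vel

def score(car_pos, obstacle_pos, goal_pos):
    # Lower is better: distance to goal, plus a big penalty near the obstacle.
    s = abs(goal_pos - car_pos)
    if abs(obstacle_pos - car_pos) < 2.0:  # made-up "collision" threshold
        s += 1000.0
    return s

def best_action(car_pos, car_vel, obs_pos, obs_vel, goal_pos, depth=3):
    # Enumerate every plan of `depth` actions, simulate it forward, and
    # keep the plan whose worst simulated moment is least bad (minimax flavor).
    actions = [-3.0, 0.0, 3.0]  # brake, coast, accelerate (m/s^2)
    best = None
    for plan in itertools.product(actions, repeat=depth):
        cp, cv, op, ov = car_pos, car_vel, obs_pos, obs_vel
        worst = 0.0
        for a in plan:
            cp, cv = step(cp, cv, a)
            op, ov = step(op, ov, 0.0)  # assume the obstacle coasts
            worst = max(worst, score(cp, op, goal_pos))
        if best is None or worst < best[0]:
            best = (worst, plan[0])
    return best[1]  # first action of the best plan, re-planned every tick

print(best_action(car_pos=0.0, car_vel=10.0, obs_pos=15.0, obs_vel=8.0, goal_pos=100.0))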

Of course, the general method described above can be modified for piloting, instead of driving (just like the general algorithms known in Game Theory can be modified for playing different games).

20171217

Comparing Security Of Programming Languages

Many different programming languages are used to create all kinds of software. And computer software security is extremely important today, and probably will become even more important in the future.

Big Question:
Are all programming languages inherently equal from a security viewpoint? Or are some really inherently more secure than others? (And, a related question: are some Operating Systems more secure than others?) How can we compare them objectively for inherent (natural) security?

First, how are software bugs used by hackers or malware to break the security of computer systems?

I think they send a series of instructions/input data to any accessible software, to activate known (and unpatched) bugs, which create unhandled runtime exceptions like division by zero, buffer overflow/underflow, array out of bounds, dangling pointer, ...
 
Now, for simplicity, assume we want to compare the security of native executables, for a certain OS, compiled using a certain brand and version of compiler, for a certain programming language (and its version), like C, C++, Delphi, ...

Imagine if we created a table for objectively comparing security as follows:

First column: A (sorted) full list of common runtime exceptions, like division by zero, buffer overflow/underflow, array out of bounds, dangling pointer, ...
Next, add one column for each language compiler.

Next, we fill out the cell values of our table (each will be -1 (No) or +1 (Yes)) by asking this question:

Can the runtime exception on the left happen with the language compiler at the top? (Assume the programmer wrote some section of compiled software using that language compiler and forgot to add any exception handling for it.)

(If the OS version we are creating this table for already has general safe handling for a certain common runtime exception, so that it can never be used by hackers/malware, then we do not need to include that exception in our table, obviously.)

(If the runtime exception on the left inherently cannot happen for the language compiler at the top, the cell value must still be -1 (No), because that is still an advantage for that compiler. Since all these programming languages are Turing-complete, any algorithm can be implemented with any of them. So we must conclude that if the runtime exception on the left inherently cannot happen, no ability is lost, but inherent security is gained.)

Then, in the end, we can compare the inherent security of each language compiler included in our table by simply calculating the sum of each column as an inherent security score. Smaller sums would indicate higher inherent security.

But I think if we did a statistical analysis of all existing software (for a certain OS version), we would find that some kinds of dangerous runtime exceptions are more common than others. That means, if we know the relative frequency (RF) of each common runtime exception (bug) in our table, we can make our inherent security scores more realistic/accurate by using the relative frequencies as weights for the runtime exceptions on the left.
(So then each cell value would be -1*RF or +1*RF.)
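
A minimal Python sketch of the weighted scoring table (the compilers, cell values, and relative frequencies below are invented placeholders, not measurements):

# Relative frequency of each runtime exception, used as a weight.
exceptions = {
    "division by zero":    0.10,
    "buffer overflow":     0.40,
    "array out of bounds": 0.30,
    "dangling pointer":    0.20,
}

# +1 = can happen, -1 = cannot happen (placeholder judgments).
table = {
    "Compiler X": {"division by zero": 1, "buffer overflow": 1,
                   "array out of bounds": 1, "dangling pointer": 1},
    "Compiler Y": {"division by zero": 1, "buffer overflow": -1,
                   "array out of bounds": -1, "dangling pointer": -1},
}

# Weighted column sum: a smaller score means higher inherent security.
for compiler, cells in table.items():
    score = sum(cells[e] * rf for e, rf in exceptions.items())
    print(compiler, round(score, 2))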

Can we use this kind of programming language compiler security scoring table to also score and compare the security of different OSs (and their different versions)? I think the answer is yes.
Imagine we re-evaluated the same security scoring table (same set of row and column titles) for different OSs (and their different versions). Then, for each table, we calculated the sum of all cells in the table to get a total security score for that OS (version). (Again, smaller values would indicate higher inherent security.)

20171121

The Ultimate VR Device

Light, thin, sturdy, mobile, supercapacitor-charged computer (PC/MAC/Linux/NET/Android/iOS) glasses that cover both eyes (and ears) completely.

It must be possible to wear it even while lying in bed, even for many hours every day, without feeling sick or tired.

It could let no outside light reach either eye.
Or it could be set to any level of transparency wanted, from 0% to 100%, for both eyes together and/or independently.

Sounds coming from outside also must be fully adjustable.

It must have WiFi, Bluetooth, and USB (at least the two most common connector types) at minimum.

It must have all kinds of sensors for VR and smartphone applications.

It must have a good mouse (it could be something similar to a short and fat pen (USB-chargeable), with a track-point at top, for example).

For software, it must have all kinds of commonly used remote desktop connection/access software, if possible.
It would also be better if it had its own virtual keyboard(s), internet browser, media player, and text/document/image/video viewer(s)/editor(s).

It should use (at least one) standard removable solid-state memory card as its on-board long-term file storage drive.

It would also be better if it supported AR.

20171118

Baryon Asymmetry Problem

One of the biggest unsolved problems in physics is the Baryon Asymmetry (BA) problem. Why did the Big Bang (BB) create more matter than antimatter?

I think the first question for BA really is: what was the mechanism of particle creation? I think the assumption in the world of physics, since the beginning of BB Theory, has been Pair Production (PP).
But PP seems to always create matter-antimatter particle twins, so always equal amounts of matter and antimatter. Then what was the reason the balance tipped towards the matter side?

But is PP the only possible mechanism of particle creation we know?

When an unstable matter/antimatter nucleus or particle decays, it also creates new particles and/or antiparticles, true? If true, is it not possible the BB created new particles like an unstable particle decay event? If so, I think the most reasonable assumption would be that the whole universe was a single unstable (elementary) quantum particle at the beginning. (Or multiple particles? (Then most likely an odd number of particles!))

If our universe began as a single unstable quantum particle suddenly decaying, what set of particles/antiparticles were the decay products? Since our universe contains much more DM than matter, one of the decay products must be DM particles. I think we need to find out for certain (if possible) how the DM/M ratio of our universe has changed since the beginning. I think if the ratio was always constant, that would mean DM and M particles were created with that ratio right at the beginning. And if the amount of matter (vs DM) kept increasing since the beginning, that would mean DM has been creating matter since the beginning. (So it would also mean the BB initially created only DM.)

Clearly, each unstable nucleus/particle decay event is like a very powerful explosion at its own scale. So our universe beginning as a single unstable quantum particle decaying would definitely be the mother/father of all explosions, thus truly deserving the name BB!
Which, I think, would also explain how/why the era of inflation (hyperfast expansion) happened in the beginning times of our universe.

If our universe started as a single unstable (elementary) quantum particle, the biggest question would be this:
What were its properties? (All possible sets of elementary quantum particle state values: energy, wavelength/frequency, rest mass, spin, charge, color charge, etc.)
Obviously the main constraint of the problem is that the final result of the initial particle decay process needs to be able to produce all the DM/M of our known universe. (And I think that requires us to first consider how the quantum properties of particle decay products relate to the quantum properties of the initial unstable particle that decayed.)

I think this may imply another constraint for the problem:
From Wikipedia:
"The presence of an electric dipole moment (EDM) in any fundamental particle would violate both parity (P) and time (T) symmetries. As such, an EDM would allow matter and antimatter to decay at different rates leading to a possible matter-antimatter asymmetry as observed today."
https://en.wikipedia.org/wiki/Baryon_asymmetry

See also:
https://en.wikipedia.org/wiki/Pair_production
https://en.wikipedia.org/wiki/Inflation_(cosmology)
https://en.wikipedia.org/wiki/Big_Bang
https://en.wikipedia.org/wiki/Dark_matter

20171109

A GUT OF QUANTUM GRAVITY

What really is spacetime and what really are elementary quantum particles?

Imagine spacetime is an emergent property: a gas-like medium created by virtual quantum particles that keep popping in and out of existence for extremely short durations, which is also the medium of the quantum vacuum. Imagine flat spacetime is the volume where the probabilities for creation of positive and negative energy/mass virtual particles are equal. Imagine positive-curvature spacetime is the volume where the probabilities for creation of positive energy/mass virtual particles are higher. Imagine negative-curvature spacetime is the volume where the probabilities for creation of negative energy/mass virtual particles are higher. (Realize that spacetime would then really be a medium of probability.) Imagine that when a region has excess positive energy available, positive energy/mass virtual particles are not just created more often but stay in existence longer.
And whenever/wherever the energy is higher than the necessary thresholds, virtual particles are created as real particles. (And when a region has excess negative energy available instead, then negative energy/mass virtual/real particles are created similarly instead.)

Imagine that when light passes through spacetime regions with different positive/negative curvature, it is like passing through gas/fluid regions with a positive/negative index of refraction.

(So a positive energy/mass particle/object creates a field of positive spacetime curvature around itself, which we call its gravitational field.)

Realize that if a gravitational field is polarization of virtual particles, then creating the Casimir Force is actually creating artificial spacetime curvature/gravity!

Imagine all the elementary quantum particles of the Standard Model, which are used to create the virtual particles that create the gas-like spacetime medium, are really quasiparticles of a fluid-like medium, like bubbles created by a boiling fluid. Imagine that fluid-like medium is created by a Cellular Automaton Quantum Computer (CAQC) with Planck-length-scale cells of qubit registers. Imagine each elementary quantum particle is like a cluster of information/probability, probably like a spherical probability wave traveling in the fluid-like medium created by the CA, maybe similar to the CA used for fluid simulation, like LGCA(FHP)/LBM. (Also realize that what happens in CA used for fluid simulation, regarding predictability of the future (the nature of time), is really similar to what happens in our real physical Universe:
At the microscale the future is unpredictable (particles move randomly), but it becomes more and more predictable with certainty as we watch it at higher and higher scales. Imagine we just watch/observe that CA world using bigger and bigger tiles, calculating the average particle number/velocity/acceleration for each tile. Then the CA world starts following the rules of classical physics (the Navier-Stokes Equation) better and better, meaning the future becomes better and better predictable as we observe the CA world at higher and higher scales.
This is very similar to how future events are unpredictable with certainty at the QM scale, compared to how future events are predictable with certainty at the Relativity scale. And the predictability of future events is in between those two extremes at the Newtonian Mechanics (human) scale.)

If the above is assumed to be true, it would mean the quasiparticles of the Planck-scale medium are somehow allowed to exist only as a discrete and limited set, which is the set of elementary quantum particles of the Standard Model. (So nothing like soap bubbles, which have a continuous size range and an identical/similar nature.)

Also, obviously, this Planck-scale medium has a limited maximum signal/information travel speed, which we call the speed of light (c). So quantum particles without rest mass always travel at c.
And quantum particles with rest mass travel at lower speeds depending on their rest mass plus kinetic energy. What slows them down, I am guessing, is the Higgs particle field across our Universe.

So rest mass is like a binary property of elementary quantum particles, with possible values of 0 or 1. If it is 1, the particle creates a drag moving through the Higgs Field, because of its interaction with it. Then its speed through the Higgs Field depends on its total energy (rest mass/energy plus kinetic energy, which determines the size (wavelength) of the particle). And if its total energy is greater, its size/wavelength is smaller, and it moves faster through the Higgs Field and so through spacetime.

I think the Standard Model is not complete and there are at least two more elementary particles to be discovered. I think one of them is the Planck Particle, and it must be what Black Holes are made of. I think the other must be the particle of Dark Matter (could it be the graviton?).

Based on the ideas above, I think the recent discovery of "hot gas" in DM clouds/filaments must be because of DM creating a positive spacetime curvature, which means higher probabilities for positive energy/mass virtual particles of the quantum vacuum. (So it is a phenomenon similar to Hawking Radiation.)

But why do elementary quantum particles have quantum properties/abilities like entanglement? I think it could be because reality is created by a Cellular Automaton Quantum Computer (CAQC) with Planck-scale cells. So, since the elementary quantum particles of the SM are the quasiparticles of this CAQC, they also have quantum properties, being clusters of qubit information processed by a (CA) QC.

If gravitational fields are fields of (positive) spacetime curvature, and spacetime is a medium created by virtual particles, then how would objects attract each other? Obviously, a vacuum region with higher probabilities for positive energy/mass virtual particles must be like a low-pressure gas region of the spacetime medium. And a vacuum region with higher probabilities for negative energy/mass virtual particles must be like a high-pressure gas region of the spacetime medium. (Imagine each particle with positive energy/mass is a region of positive curvature (of the Planck-scale medium), so when they group together in clusters (objects with mass), they create a macroscale positive-curvature region, like a low-pressure gas region of the gas-like spacetime medium.)

20171102

The Table Of Elementary Quantum Particles

I think the discovery of the Periodic Table (PT) of chemical elements allowed accurate prediction of many new/unknown elements and their various properties. (If we are given only the atomic number and mass number of an element, can we accurately predict all its properties (nuclear, chemical, physical, electric, magnetic) using only Quantum Mechanics?) So the set of all chemical elements clearly has a basic (and standard) order (the PT)! But there are also many known (and useful) alternative periodic tables (APT). (Isn't there any precise (and unique) mathematical/geometric object/structure/group/graph for the set of all chemical elements, other than various table structures? And if so, could that object explain all basic properties of all elements?) So we could say the order of the set of all chemical elements is not really unique!

Do we really have any true equivalent of the PT/APT for the elementary quantum particles (of the Standard Model)? I think the answer is really no! Because we have not found any clear order for the energies/masses of the elementary quantum particles, so far!

I think if there is truly no order (can we ever hope to prove that mathematically?), it could be viewed as a sign of a multiverse (or Intelligent Design?)! And if there is an order and it is unique, it could be viewed as a sign of the natural inevitability of our reality/universe. My guess is it will turn out similar to the PT/APT situation of the set of all chemical elements (a non-unique order)!

What can we do to find it/them, if it really exists?

I think, as a first step, we should try to create a basic (and standard) table for the elementary quantum particles. It needs to be sorted by particle (rest) energy (since that is the order we are primarily trying to explain), and it surely needs to be simplified using Planck Units.

Here is a proposal for a basic (and standard) table of elementary quantum particles:

Column 0: Name/symbol of the elementary particle

Column 1: Compton Wavelength of the elementary particle in Planck Length Units

Column 2: Corresponding Compton Frequency of the elementary particle

Column 3: Does the elementary particle have rest mass? (Y/N)

Column 4: Electric Charge (in Electron Charge units) (Or, is there a Planck unit for electric charge?)

Column 5: Spin

Column 6: Color Charge

(Table needs to be sorted (ascending/descending) by column 1 values, by default.)
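
A minimal Python sketch of building and sorting such a table (only a few particles shown; the masses are rounded textbook values, and I use the reduced Compton wavelength hbar/(m*c) for column 1):

HBAR = 1.054571817e-34        # J*s
C = 2.99792458e8              # m/s
PLANCK_LENGTH = 1.616255e-35  # m
EV = 1.602176634e-19          # J per eV

# (name, rest mass in MeV/c^2, electric charge in e, spin, color charge)
particles = [
    ("electron", 0.511,    -1,    0.5, None),
    ("muon",     105.66,   -1,    0.5, None),
    ("up quark", 2.2,       2/3,  0.5, "r/g/b"),
    ("Z boson",  91187.6,   0,    1.0, None),
]

rows = []
for name, mev, charge, spin, color in particles:
    mass_kg = mev * 1e6 * EV / C**2
    compton = HBAR / (mass_kg * C)         # reduced Compton wavelength, meters
    rows.append((name,
                 compton / PLANCK_LENGTH,  # column 1: wavelength in Planck lengths
                 C / compton,              # column 2: corresponding frequency, Hz
                 "Y",                      # column 3: has rest mass
                 charge, spin, color))     # columns 4-6

# Sorted ascending by column 1, as the proposal says.
for row in sorted(rows, key=lambda r: r[1]):
    print(row)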

https://en.wikipedia.org/wiki/Chemical_element
https://en.wikipedia.org/wiki/List_of_chemical_elements
https://en.wikipedia.org/wiki/Atomic_number
https://en.wikipedia.org/wiki/Mass_number
https://en.wikipedia.org/wiki/Periodic_table
https://en.wikipedia.org/wiki/Alternative_periodic_tables
https://en.wikipedia.org/wiki/Standard_Model

20171028

Production Of Quantum Particles

How might quantum particles be produced by our Universe?

Assume our reality is created by a CA QC operating at the Planck Scale.
Assume it creates a Planck-scale-particle-based fluid medium, just like LBM (CA) creates a 2d/3d fluid simulation.
Assume that when that fluid medium starts boiling, it creates bubbles (which are its quasiparticles).
And since the cells of the CA QC are qubit (register) based, those bubbles/quasiparticles have quantum properties.

So assume our universal fluid medium creates bubbles/quasiparticles (quantum particles),
as (positive/negative energy) virtual/real single particles or particle/antiparticle pairs, depending on local conditions.

Assume our perception of spacetime is created by the virtual particles of the quantum vacuum.
Assume a gravitational field is polarization of spacetime.
Assume positive spacetime curvature is actually the quantum vacuum producing more positive-energy virtual particles than negative.
Assume negative spacetime curvature is actually the quantum vacuum producing more negative-energy virtual particles than positive.
(So the Casimir Force is actually creating artificial gravity/anti-gravity!)
And if the (positive/negative) curvature is beyond the necessary threshold, then a real particle (pair) is produced instead of a virtual particle (pair).

So we can say:
The amplitude of spacetime curvature decides whether a virtual or a real particle (pair) will be produced.
The sign of spacetime curvature decides whether a positive or negative energy/mass particle (pair) will be produced.
The polarization/rotation/spin of spacetime curvature (?) decides whether a particle and/or an anti-particle will be produced.

Matter And Dark Matter

Assume that at the beginning of the Big Bang, the Universe was a ball of positive energy in the middle of a medium of negative energy.
Later it started absorbing negative energy and so started expanding.
As its positive energy density dropped below a threshold, DM particles were created nearly uniformly everywhere. As the Universe continued to expand, DM particles coalesced into the filaments of the cosmic web.

The BB also created hydrogen and helium uniformly everywhere.
Later, the DM filaments provided guidance for matter, stars, and galaxies to form. But we must realize this view leads to the Baryon Asymmetry Problem!

What if the matter of our Universe was created through a different mechanism, which is asymmetric?

If we look at our Universe, it looks like matter has coalesced in the central regions of the DM filaments/clouds. What if matter did not coalesce there, but was created in those central regions of DM clouds?

What if, whenever and wherever the DM cloud density goes above a certain threshold, particles of the Standard Model are created, without their anti-particles? (And then the DM cloud density would later drop below the threshold there, like a negative feedback mechanism. If so, that would mean the total amount of DM in the Universe must be decreasing over time!)

And what if DM particles are gravitons with extremely low mass/energy, and so with an extremely large size (Compton Wavelength)?
That may be why we cannot detect them directly and why they cannot join with each other to create a BH, etc. (There may be a rule for them similar to the Pauli Exclusion Principle?)

About Graviton from Wikipedia:

"The analysis of gravitational waves yielded a new upper bound on the mass of gravitons, if gravitons are massive at all. The graviton's Compton wavelength is at least 1.6×10^16 m, or about 1.6 light-years, corresponding to a graviton mass of no more than 7.7×10^-23 eV/c2.[17] This relation between wavelength and energy is calculated with the Planck-Einstein relation, the same formula which relates electromagnetic wavelength to photon energy."

https://en.wikipedia.org/wiki/Dark_energy
https://en.wikipedia.org/wiki/Dark_matter
https://en.wikipedia.org/wiki/Graviton
https://en.wikipedia.org/wiki/Pauli_exclusion_principle
https://en.wikipedia.org/wiki/Pair_production
https://en.wikipedia.org/wiki/Two-photon_physics
https://en.wikipedia.org/wiki/Baryon_asymmetry
https://en.wikipedia.org/wiki/Standard_Model

20171024

Spacetime Curvature And Speed Of Light

What if Gravity is the 5th, emergent dimension? (So the mass/energy of a particle is its location in the gravity dimension (+ or -).)
(The 2D surface of a sphere is bent in the 3rd dimension. 4D spacetime is bent in the 5th (Gravity) dimension whenever (+ or -) energy/mass is present.)

When a positive spacetime curvature is present, the speed of light must slow down passing through that location. (Just like light slows down and refracts when it enters water from air.)
And if so, how can the index of refraction and the local speed of light be calculated for any spacetime location?

Spacetime curvature (which we can calculate) determines the deflection angle (which we can also calculate).
Using Snell's Law:
sin(t0)/sin(t1)=v0/v1=n1/n0
Also, let c be the speed of light when there is no curvature.
If we plug in the values we know/assume, this is what we have:
sin(t0)/sin(t1)=c/v1=n1/n0

We can calculate the total bending (deflection) angle of light (in radians) in General Relativity:
deltaPhi=4*G*M/c^2/r (M: Mass in kg; r: Distance from center in meters; c: Speed of light in m/s; G: 6.7E-11)

Assume incoming angle of light is 90 degrees (pi/2 radians) (for refraction index=1 because n=c/v and no spacetime curvature in the first medium):
=> 1/sin(t1)=c/v1=n1/1 =>
1/sin(deltaPhi)=c/v1=n1/1 =>
1/sin(4*G*M/c^2/r)=c/v1=n1/1 =>
n1=c/v1=1/sin(4*G*M/c^2/r) (Index of refraction for any spacetime location bending light)
(=> Possible extreme values:1/0=inf or -inf depending on direction of approach;1/1=1;1/-1=-1 => Range: -inf to +inf)
v1=c/n1=c*sin(4*G*M/c^2/r) (current speed of light for any spacetime location bending light)
(=> Possible extreme values: c*0=0; c*1=c; c*(-1)=-c => Range: -c to +c)
(Negative c would mean time is flowing backwards there!? c is the flow rate of time (event information flow (perception) rate) anywhere.)
(So it is not possible to make time move faster than c, but it can be slowed, and its direction may be changed using negative energy/mass.)
(Light slows down in a (positive) gravitational field because the field is denser from the light's point of view. Imagine more positive energy/mass virtual particles on the way.

A gravitational field is actually a local polarization of the virtual-particle (each with + or - energy/mass) balance at any spacetime location.) If the total net energy is negative, the curvature would be negative. Then the index of refraction would also be negative.
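
A quick Python check, plugging in the numbers for light grazing the Sun (the deflection formula is the standard GR result, ~1.75 arcseconds; the index-of-refraction and speed values are outputs of the speculative interpretation proposed above):

import math

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s
M = 1.989e30    # kg, solar mass
r = 6.957e8     # m, solar radius

delta_phi = 4 * G * M / c**2 / r   # total deflection, radians
print(delta_phi)                    # ~8.5e-6 rad (~1.75 arcseconds)

n1 = 1 / math.sin(delta_phi)        # proposed index of refraction
v1 = c * math.sin(delta_phi)        # proposed local speed of light
print(n1, v1)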

(The speed of light anywhere is the speed of information flow between the CA cells, which determines the perception of events in Relativity by any observer.)

https://en.wikipedia.org/wiki/Refractive_index
https://en.wikipedia.org/wiki/Snell%27s_law

20171022

Geometry of Our Universe 2

http://scienceblogs.com/startswithabang/2017/10/22/comments-of-the-week-final-edition/

Ethan wrote:
"From Frank on the curvature of the Universe: “What if Universe is surface of a 4d sphere where 3d surface (space) curved in the 4th dimension (time)?”"
"Well, there is curvature in the fourth dimension, but the laws of relativity tell you how the relationship between space and time occur. There’s no wiggle-room or free parameters in there. If you want the Universe to be the surface of a 4D sphere, you need an extra spatial dimension. There are many physics theories that consider exactly that scenario, and they are constrained but not ruled out."

Then what if I propose that the gravitational field across the Universe is the fifth dimension (for the Universe to be the surface of a 4D sphere)? (And also think about why it seems gravity is the only fundamental force that affects all dimensions. Couldn't it be because gravity itself is a dimension, so it must be included together with the other dimensions (of spacetime) in physics calculations?)

And why is it really important to know the general shape/geometry of the Universe?

I think then we could really answer whether the observable universe and the global universe are the same or not, and if they are the same, we would also know that the Universe is finite in size. (And we could also calculate the general curvature of the Universe for any time, which would help cosmology greatly, no doubt.)

I am guessing the currently known variations in the CMB map of the Universe match the distribution of matter/energy in the observable Universe only in a general (non-precise) way. I think, if the Universe really is the 3d (space) surface of a 4D sphere, curved in the 4th dimension (time) (with gravity as the 5th dimension), then we could use the CMB map of the Universe as CT scan data and calculate the 3d/4d matter/energy distribution of the whole Universe from it. And then, if it matches (as a whole) the matter/energy distribution of our real observational Universe (which comes from other (non-CMB) observations/calculations), we could know for sure whether our observational and global Universes are identical or not. (If not, then by looking at the partial match, maybe we could still deduce how large our global Universe really is.)

Further speculation:

Let's start with, spacetime is 4D (3 space dimensions and a time dimension).
Gravitational curvature at any spacetime point must be a 4D value => 4 more dimensions for the Universe.
If electric field at any spacetime point is a 4D value => 4 more dimensions for the Universe.
If magnetic field at any spacetime point is a 4D value => 4 more dimensions for the Universe.
Then the Universe would have 4+4+4+4=16 dimensions total!
(Then the dimensions of the Universe could be 4 quaternions = 2 octonions = 1 sedenion.)
(But if electric and magnetic fields require 3d + 3d, then the dimensions of the Universe would be 4+4+3+3=14 dimensions!)

20171028:
If our Universe has 16 dimensions, and if our reality is created by a CA QC at the Planck Scale, then its cell neighborhood may be like a tesseract or a double-cube (16 vertices). Or if our Universe has 14 dimensions, and if our reality is created by a CA QC at the Planck Scale, then its cell neighborhood may be like a Cube-Octahedron Compound or Cube 2-Compound (14 vertices).

(20171104) What if Kaluza–Klein Theory (which unites Relativity and Electromagnetism using a fifth dimension) is actually correct, taking the gravitational field across the universe as the fifth (macro/micro) dimension? (Maybe compatibility with Relativity requires taking it as a macro dimension, and QM requires taking it as a micro dimension? (Which would be fine!?))

(20171115) According to Newtonian physics, the speed of any object in the Universe is always:
|V|=(Vx^2+Vy^2+Vz^2)^(1/2) or V^2=Vx^2+Vy^2+Vz^2
But according to the Special Theory of Relativity, it really is:
C^2=Vx^2+Vy^2+Vz^2+Vt^2 which also means Vt^2=C^2-Vx^2-Vy^2-Vz^2 and so |Vt|=(C^2-Vx^2-Vy^2-Vz^2)^(1/2)
So, if gravitational field across the Universe is actually its 5th (macro) dimension then:
C^2=Vx^2+Vy^2+Vz^2+Vt^2+Vw^2 which also means Vw^2=C^2-Vx^2-Vy^2-Vz^2-Vt^2 and so |Vw|=(C^2-Vx^2-Vy^2-Vz^2-Vt^2)^(1/2)
(Is this the equation to calculate spacetime curvature from 4D velocity in General Relativity?)
(Equivalence Principle says gravity is equivalent to acceleration => Calculate its derivative?)
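
A quick numeric illustration of the relation above, in Python (this "speed through time" reading is a popular interpretation of the four-velocity normalization; the 0.6c example is mine):

# An object moving at 0.6c through space, in units of c.
c = 1.0
vx, vy, vz = 0.6, 0.0, 0.0

vt = (c**2 - vx**2 - vy**2 - vz**2) ** 0.5
print(vt)  # 0.8 -- "speed through time" drops as speed through space grows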

20171021

Explaining Masses of Elementary Quantum Particles

How can we explain the masses of elementary quantum particles?

All elementary quantum particles have energy, some of it in the form of (rest) mass. Then the (rest) mass value of each particle is just 0 or 1.

Then what really needs to be explained is the energy distribution (order) of the list of elementary quantum particles.

We already know the energy of each particle is quantized (discrete) in a Planck unit. (Then the energy of each elementary particle is an integer.) And the Compton Wavelength of each particle can be seen as its energy/size.

Then what needs to be explained is this:

Imagine we made a (sorted) bar chart of energies of elementary quantum particles. Then, is there a clear order of how energy changes from lowest to highest?

Or what if we made a similar sorted bar chart of particle Compton Wavelengths?

Or what if we made a similar sorted bar chart of particle Compton Frequencies?

Realize that the problem we are trying to solve is a kind of curve fitting problem.

Also realize we are really treating the data as a time series here.
But how do we really know if our data is a time series?

Also realize that, if we consider the case of the sorted bar chart of particle Compton Frequencies, what we really have is a frequency distribution (not a time series).

Wikipedia says: "The Fourier transform decomposes a function of time (a signal) into the frequencies that make it up"

Then what if we apply the Inverse Fourier Transform to the Compton frequency distribution of the elementary quantum particles?

Would we not get a time series that we could use for curve fitting?

(And would it not then be possible that the curve we found could allow us to predict whether there are any smaller or larger elementary particles which we have not discovered yet?)
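
A minimal Python sketch of that procedure (charged-lepton Compton frequencies only, rounded; the bin layout and resolution are arbitrary choices of mine):

import numpy as np

# Compton frequencies f = m*c^2/h for electron, muon, tau (rounded, Hz).
freqs = np.array([1.236e20, 2.556e22, 4.297e23])

# Build a crude "frequency distribution": unit spikes at those frequencies.
n_bins = 4096
f_max = 5e23
spectrum = np.zeros(n_bins)
for f in freqs:
    spectrum[int(f / f_max * (n_bins - 1))] = 1.0

# The inverse FFT turns the frequency distribution into a "time series".
signal = np.fft.ifft(spectrum)
print(signal[:5])  # complex samples one could then try to curve-fit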

https://en.wikipedia.org/wiki/Fourier_transform
https://en.wikipedia.org/wiki/Curve_fitting
https://en.wikipedia.org/wiki/Time_series

20171018

Geometry of Our Universe

The following are my comments recently published at:
http://scienceblogs.com/startswithabang/2017/10/14/ask-ethan-is-the-universe-finite-or-infinite-synopsis/

@Ethan:
“If space were positively curved, like we lived on the surface of a 4D sphere, distant light rays would converge.”
Think of surface of a 3d sphere first:
It is a 2d surface curved in the 3rd dimension.
Now think of surface of a 4d sphere:
It is a 3d surface curved in the 4th dimension.
What if Universe is surface of a 4d sphere where 3d surface (space) curved in the 4th dimension (time)?
So is it really not possible that the 3d space we see using our telescopes could be flat in those 3 dimensions of space, but curved in the time dimension?

First let me try to better explain what I mean exactly:
Let’s first simplify the problem:
Assume our universe was 2d, as the surface of a 3d sphere. Now latitude and longitude are our 2 space dimensions. Our distance from the center of the sphere is our time dimension.

Since our universe is the surface of a 3d sphere, it has a general uniform positive curvature that depends on our time coordinate at any time.

Now the big question is this:
As beings of 2 dimensions now, can we directly measure the global uniform curvature of our universe in any possible way? Or, asking the same question another way: would our universe look curved or flat to us?

If the speed of light were high enough, and if we had an astronomically powerful laser, we could send a beam in any direction and later see it come back from the exact opposite direction, sometime later.
Then we would know for certain our universe is finite.
But I claim we still would not know the general curvature of our universe.

Could we really find/measure it by observing the stars or galaxies around, in our 2d universe?

For the answer, first realize we don't know any poles for our universe. We can use any point in our 2d universe as our North Pole; would it make any difference for coordinates/measurements/observations?
Then why not take our location in our 2d universe as the north pole of our universe.

Now try to imagine all the longitude lines coming into our location (the north pole of our coordinate system) as the star/galaxy lights.
Can we really see/measure the general curvature of our universe from those light beams coming to us from every direction we can see?
I claim the answer is no.

Why? I claim that as long as we are making all observations and experiments to calculate the general curvature using only our space dimensions (latitude and longitude),
we will always find it to be perfectly flat in those 2 dimensions. I also claim we could calculate the general curvature of our 2d universe (latitude and longitude) only if we include precise time coordinates in the measurements/experiments, as well as precise latitude and longitude coordinates.

So I really claim our universe looks flat to us because we are making all observations/measurements in 3 space dimensions. But if we also include time coordinates, then we can calculate the true general curvature of our universe.

And I further claim:

Curvature of circle (1d curved line on 2d space):
1/r

Curvature of sphere (2d curved plane on 3d space):
1/r^2

Curvature of sphere (3d curved space on a 4d space):
1/r^3

So if our universe was 2d space and 1 time (2d curved plane on 3d space):
Its general curvature at any time would be:
1/r^2=1/(c*t)^2 (where c is the speed of light and t time passed since The Big Bang in seconds)

And so if our universe is 3d space and 1 time (3d curved space on 4d space):
Then its general curvature at any time is:
1/r^3=1/(c*t)^3 (where c is the speed of light and t time passed since The Big Bang in seconds)

And I further claim:

If astrophysicists recalculated the general curvature of our universe by including all space and time coordinate information correctly, they should be able to verify that the calculation results always match the theoretical value, which is 1/(c*t)^3.

The raw data to use for those calculations would be pictures of the universe in the same direction, looking at views there from different times.

I realized this value for the current general curvature of our universe (1/(c*t)^3) would be correct only if we ignore the expansion of the universe. To get correct values for any time, we need to use the radius of the universe at that time, including the effect of the expansion until that time.

Wikipedia says:
“it is currently unknown whether the observable universe is identical to the global universe”

From what I claimed above, I claim they are identical.

(So if the current radius of the observational universe is 46 Bly, then I claim the current global curvature of our universe is 1/(46 Bly in meters)^3.)
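
Working that last number out in Python (rounded constants):

LY_IN_M = 9.4607e15      # meters per light-year
r = 46e9 * LY_IN_M       # radius: ~4.35e26 m

curvature = 1 / r**3
print(r, curvature)      # ~4.35e26 m, ~1.2e-80 m^-3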

Dark Matter and Nature of Gravitational Fields And Spacetime

The following are my comments recently published at:
http://scienceblogs.com/startswithabang/2017/10/10/missing-matter-found-but-doesnt-dent-dark-matter-synopsis/

“Neutral atoms formed when the Universe was a mere 380,000 years old; after hundreds of millions of years, the hot, ultraviolet light from those early stars hits those intergalactic atoms. When it does, those photons get absorbed, kicking the electrons out of their atoms entirely, and creating an intergalactic plasma: the warm-hot intergalactic medium (WHIM).”
So the UV light from the earliest stars keeps the intergalactic gas hot (and does it perfectly for all gas atoms, somehow).
But how is it possible that the UV light photons stayed the same after billions of years of expansion of the universe?
I have a really crazy idea about this WHIM which may be a better explanation, though:
What if the WHIM is no ordinary gas?
What if the WHIM is an effect similar to Hawking Radiation?

What if spacetime is created by virtual particles as an emergent property? What if Gravitational Fields are polarization of spacetime? (Where positive curvature indicates probabilities of positive energy/mass virtual particles are higher in that region and negative curvature indicates probabilities of negative energy/mass virtual particles are higher in that region.)

In the case of the WHIM, imagine Dark Matter particles increase the probabilities of positive energy/mass virtual particles, and we observe it as hot gas.

Imagine any (+/-) unbalanced probabilities for virtual particles on the path of light rays act like different gas media that change the local refractive index, so the light rays bend.

And in the case of BHs, imagine the probabilities of positive energy/mass virtual particles increase so much nearby that some of those particles turn real, which we could observe as Hawking Radiation.

I just realized that if my ideas about the true nature of spacetime and gravitational fields (stated above) are correct, it would mean the Casimir Force can actually be thought of as creating artificial gravity, like in Star Trek, for example.

I am guessing that if positive spacetime curvature slows down time, then negative curvature should speed it up. Then, if the Casimir Force is creating spacetime curvature, and since we can make it negative in the lab, we can make time move faster, and it may be measurable in the lab.

I wonder if we could use sheets of Graphene as Casimir Plates and stack them as countless layers to create a multiplied Casimir Force generator. Then we could also add a strong electric and/or magnetic field to amplify that force. Could a device like that create a human-weight-level-strong artificial gravity field?

Imagine you made bricks of artificial gravity generators.
Imagine a spaceship (or space station) with a single floor of those bricks. Imagine the crew walks on the top and the bottom of that single floor (upside-down to each other). So you have a kind of symmetric (up-down) two-floor internal spaceship design.

Also, what if those bricks could also create artificial anti-gravity?
(Wikipedia says we can generate both attracting and repelling Casimir Forces.) If that is possible, imagine each floor of the spaceship is two layers of bricks. The top layer generates gravity; the bottom layer generates anti-gravity. People on the top floor feel the downward force of gravity, but people on the lower floor do not feel the upward force of gravity, because the anti-gravity layer (which they are closest to) cancels out the total gravity to zero for them.

I wonder what would happen if we somehow created artificial gravity in front of a spaceship and artificial anti-gravity in the back? Could that cause the spaceship to move forward faster and faster, like it keeps falling into a gravity well?

If we can create artificial anti-gravity, I think it could also be useful as a shield in space, against space dust, etc.

What if the Planck particle is the smallest and the Dark Matter particle is the biggest size/energy particle of the Universe?

Unpublished additional comments:

If we can create positive and negative artificial gravity (using the Casimir Force) and put them side by side to create movement, then what if we did it with the rotor of an electricity generator? (+/- Casimir Force could be generated using multiple layers of Graphene sheets as Casimir Plates, and maybe amplified with a maximally strong permanent magnet.) And if that worked, would it mean creating free energy from spacetime itself (Zero-Point Energy)?

20171012

Equivalence Principle

Why are inertial and gravitational mass always equal?

Assume Newton's second law (F=m*a) is true.
Assume we used a weighing scale to measure the gravitational mass of an object on the surface of Earth. A weighing scale actually measures force. But since we know (free fall) acceleration is the same for all objects on the surface of Earth, we can calculate the gravitational mass of the object as:
m=F/a

Now imagine a thought experiment:

What if the gravity of Earth instantly switched to anti-gravity (but with the same magnitude as before)?
Then the object would start accelerating away from Earth. What if we tried to calculate the inertial mass of the object by measuring its acceleration? Realize the magnitude of that acceleration would still be the same for all objects, but with reversed sign, since the direction of acceleration is reversed. Then we have:
m=(-F)/(-a)=F/a

We assumed that the magnitude of gravitational acceleration is the same for all objects. Because a=F/m and F=G*M*m/d^2, we get a=G*M/d^2 for all objects on the surface of Earth (M: Earth mass; m: object mass).

So Newton's second law, combined with Newton's Law of Gravity, leads to inertial and gravitational mass always being equal. Then, to prove the Equivalence Principle, we would need to prove Newton's laws first.

Newton's Law of Gravity (F=G*M*m/d^2) works the same way as Coulomb's Law (F=k*Q*q/d^2), which describes the static electric force, which is a quantum force. Doesn't that mean Newton's Law of Gravity can be explained with Quantum Mechanics, or at least that it is compatible with QM?

Can Newton's second law be explained with QM?

https://en.wikipedia.org/wiki/Equivalence_principle
https://en.wikipedia.org/wiki/Mass#Inertial_vs._gravitational_mass
https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation
https://en.wikipedia.org/wiki/Coulomb's_law

20171008

The Quest For Ultimate Game Between Humans And Computers


I think Game Theory is one of the main branches of Computer Science.
A lot is known about the theoretical and practical complexity of common games like Chess, Go, Checkers, Backgammon, and Poker, and their many possible variants: how hard they are for classical (and quantum?) computers from a basic brute-force search viewpoint, from the viewpoints of various general smart search algorithms, or from the viewpoint of the best known customized searches.

In recent years there were multiple big game matches between human grand masters and classical computer software (sets of algorithms) running on various types of computers, with different processing speeds, numbers of processors, numbers of processing cores, and memory sizes and speeds. The first I heard of was a then-world-champion human losing to a (classical) computer in Chess. Later I heard about a human grand master losing to a (classical) computer in Go.

One may think humans will eventually lose against classical computers in any given game, and that against quantum computers (which are much more powerful) humans will never have any chance.
But if we look at the current situation closer, I think it is still unclear.

Were those famous Chess and Go matches between human grand masters and classical computers really fair for both sides?
I think not. In both cases the software analyzed countless historical matches and became expert on every move of those matches.
Which human grand master has such knowledge/experience and would be able to recall any of it at any moment in a game they are playing? Can there be a more fair way?

What if a Chess/Go software (intentionally) started the game matches with no knowledge of any past matches other than its own (games it played against itself)? And isn't it obvious that a human grand master would best recall the games he/she played himself/herself in the past? Wouldn't a Chess/Go match between a human grand master and a computer be much more fair with such a constraint on the computer side?

Can we make game matches between humans and (classical) computers even more fair?

I think humans lost at Chess first because the number of possible future moves does not increase (exponentially) fast enough, so a classical computer of today is able to handle it well.
In Go, however, the number of possible future moves does increase (exponentially) fast enough. The computer software used deep learning ANN software instead of relying on its ability to check so many possible future moves. So unlike in Chess, the computer did not have a powerful future-foresight ability. But does this mean computers would eventually beat any human at any similar board game, using an ANN and/or future-foresight ability?

I think it is possible the ANN approach worked successfully for Go because its rules are much simpler than those of Chess, for example. I don't think there is any evidence (at least not yet) that the ANN approach would always work against any board game. Also consider that the board size for Chess (8 by 8) is much smaller than for Go (19 by 19), which means the number of possible future moves increases much faster for Go, so a (classical) computer cannot handle it.

How about we try to combine the strength of Go (against future foresight) with the rule complexity of Chess? For example, there is a variant of Chess called Double Chess that is played on a 12 by 16 board. I think we could reasonably expect a game match between a human (grand) master and any classical computer (hardware + software) to be much more fair for both players than any past matches, because the number of possible future moves should increase similarly to Go (if not even faster), given the closer board size and the use of multiple kinds of game pieces (which are able to move in different ways). Also consider how many high-quality past game examples would be available to learn/memorize for both sides, which I am guessing should not be many for Double Chess.
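To give a rough sense of scale, here is a back-of-envelope sketch comparing game-tree sizes as b^d (branching factor b, game length d plies); the Chess and Go numbers are commonly quoted estimates, and the Double Chess numbers are only my guesses for illustration:

import math

# Rough game-tree size estimate: tree ~ b^d (branching factor b, game length d plies).
# Chess and Go values are commonly quoted estimates; the Double Chess values
# below are hypothetical, just to illustrate the comparison.
games = {
    "Chess (8x8)":          (35, 80),
    "Go (19x19)":           (250, 150),
    "Double Chess (12x16)": (70, 120),   # hypothetical values
}

for name, (b, d) in games.items():
    print(f"{name:22s}  b={b:3d}  d={d:3d}  ~10^{d * math.log10(b):.0f} positions")

For Chess this gives roughly 10^124 and for Go roughly 10^360, in line with the usual estimates, so even rough guesses for Double Chess land it far beyond Chess.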

So if we used Double Chess for game matches between humans and computers, could we find out the ultimate winner for sure? What if the computer won again; would that really mean the end for the human side for sure?

Assuming we lost again, what if we created an even more complex (for both humans and computers) variant of Chess by using an even larger board? Like turning Double Chess into Double Double Chess?
And/or what if we added a few of the proposed new chess pieces to the game? Could we then really create a board game at which no classical computer (hardware + software) could ever beat a human master player?

Why is this important?
Because I think the question actually goes far beyond deciding the final outcome of a friendly and fair battle between the creators and their creations. What is the human brain, really? Is it an advanced classical computer, a quantum computer, or an unknown kind of computer? How do human grand masters of Chess/Go play the game compared to computers? Do humans rely only on past knowledge of game playing and as much future foresight as they can manage?
Or do humans have much more advanced algorithms running in their brains compared to computers? I think how a human player decides game moves is definitely similar to how an ANN algorithm does it, but it is still beyond that. Think about how we make decisions in our daily lives, in our brains, every moment. At any given time we have a vast number of possibilities to think about. Do we choose what to think about every moment randomly? If there are certain probabilities (which depend on individual past life experiences), how do we make choices between them every moment, again and again, fast? I think the most reasonable explanation would be if our brains are not classical but quantum computers. (So neurons must be working like qubit registers.)

And if that is really true, it would mean no classical computer (hardware and software) could ever beat a human brain in a fair game.

(Also, if the human brain is a quantum computer, how about the rest of the human body? The possibilities would be Quantum Computer (QC), classical computer (Turing Machine (TM)), Pushdown Automaton (PDA), or Finite State Machine (FSM). To decide, I think we could look at (functional) computer models of biological systems. Do they operate like an FSM, PDA, TM, or QC? Do their algorithms have conditional branches and conditional loops, like a program for a TM? Or do they always use simple state transitions, like an FSM? I don't know much about how those modelling algorithms work; my guess is they are like a TM (which would mean the human body (except the brain) operates like a classical computer).)

https://en.wikipedia.org/wiki/Game_theory
https://en.wikipedia.org/wiki/Computer_chess
https://en.wikipedia.org/wiki/Computer_Go
https://en.wikipedia.org/wiki/List_of_chess_variants
https://en.wikipedia.org/wiki/Double_Chess
https://en.wikipedia.org/wiki/Automata_theory
https://en.wikipedia.org/wiki/Finite-state_machine
https://en.wikipedia.org/wiki/Pushdown_automaton
https://en.wikipedia.org/wiki/Turing_machine
https://en.wikipedia.org/wiki/Quantum_computing
https://en.wikipedia.org/wiki/Modelling_biological_systems

20171007

What If Reality Is A CA QC At Planck Scale?

If the idea in the title above is assumed true, can we make any predictions to check it?

Our experiments and observations at macro scale, where Relativity seems to rule, show no indication of quantization of spacetime or gravity.
But at micro scale, where Quantum Mechanics seems to rule, it seems all units are quantized (discrete) in terms of Planck units.
So Quantum Mechanics seems directly compatible, and I think Relativity is not directly compatible but indirectly compatible, if Relativity is assumed to be an emergent property.
(For example, the simple CA used for fluid simulation are discrete at micro scale, but create a seemingly continuous world of classical fluid mechanics (Navier-Stokes Equations).)

If our reality is really created by a CA QC operating at Planck scale (discrete both structurally and in its cell state values), then I would think:

Any time duration divided by Planck Time must be always an integer.

Any length divided by Planck Length must be always an integer.

Compton Wavelength of any quantum particle divided by Planck Length must be always an integer.

De Broglie Wavelength of any quantum particle divided by Planck Length must be always an integer.

If the minimum possible particle energy (unit particle energy) is the energy of a photon with wavelength equal to the Planck Length,
then (Compton Wavelength of any quantum particle divided by Planck Length) must be how many units of particle energy that particle is made of.
(If so, and if there is any mathematical order in the masses of elementary particles, then maybe it must be searched for after
converting their Compton Wavelengths to integers (by dividing each by the Planck Length)?)
(Also, the energy of a Planck particle (in a BH) must be the maximum energy density possible in the universe?
(If so, then the energy of the Planck particle (or its density?) divided by the unit particle energy is how many discrete energy levels (total number of states) are possible per Planck cell?))
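As a quick sanity check on the Compton Wavelength prediction above: for the electron the ratio comes out around 1.5*10^23, so whether it is exactly an integer is far beyond current measurement precision. A minimal sketch with standard constant values:

# One prediction above: (Compton wavelength / Planck length) is an integer.
# A quick look at the numbers for the electron shows why this is hard to test:
# the ratio is ~1.5e23, far beyond current measurement precision.

h   = 6.62607015e-34    # Planck constant (J*s)
c   = 2.99792458e8      # speed of light (m/s)
m_e = 9.1093837015e-31  # electron mass (kg)
l_P = 1.616255e-35      # Planck length (m)

lambda_C = h / (m_e * c)            # Compton wavelength of the electron
print(f"lambda_C       = {lambda_C:.6e} m")
print(f"lambda_C / l_P = {lambda_C / l_P:.6e}")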

Also, since all quantum particles are known to be discrete in Planck units (which are known to be the smallest possible units of space, time, wavelength, frequency, amplitude, phase, energy, and possibly also mass), I think this implies (or is at least compatible with) all known (and maybe also unknown) quantum particles actually being some kinds of quasi-particles (which I think could be described as clusters of state information), created by The Reality CA QC At Planck Scale (TRCAQCAPS? :-).

At least my interpretation is that Stephen Wolfram, in a lecture, explained that the neighborhood of any CA is related to its structural dimensions.
From that, and since we also know our universe/reality seems to be 3 space dimensions plus a time dimension at all scales, everywhere and everywhen,
we could conclude that the CA part of our reality should have 4 neighbors for each cell, in whatever physical arrangement is chosen among all physical possibilities.
For example, if the Von Neumann neighborhood physical arrangement is chosen, it would imply we are talking about a 2D square lattice CA.
Or it could be that each center cell is connected to (physically touching) 4 neighbors located around it like the four vertex corners of a regular tetrahedron.
Are there any other physical cell arrangement possibilities I do not know of?

Also, I think all physical conservation laws, like conservation of energy, imply the CA rules must always conserve the information (stored by the cells).

But what are the full range of possibilities for the internal physical structure/arrangement of the CA cells?

I think first we would need to determine what discrete set of state variables (each made of qubit registers) each CA cell needs to store.
If we want the CA to be able to create all quantum particles as quasiparticles, then each cell would need to store all basic internal quantum particle wave free variables as discrete qubit information units.
Assuming each cell is made of a physical arrangement of a total of N individual single-qubit storage subcells,
and from what we know about both the discrete wave and particle nature of quantum particles, I think it should be possible to determine how many qubits (at least) are needed for each free state variable.

But do we really know for certain, the CA cells would need to store only quantum particle information?

Wouldn't they also need to store discrete state information about local spacetime?
Because it definitely seems spacetime can bend even when it contains no quantum particles, like around any massive object.
Then the question is what spacetime/gravity state information all the CA cells would also need to store.
Since gravity is the bending of spacetime (which would be flat without gravity), and the local bending state (and more) everywhere is described by the Einstein Field Equations,
we must look into how many free variables those equations contain (the metric tensor, being a symmetric 4x4 matrix, has 10 independent components, for example),
and how many qubits (at least) would be needed to store each of those free variables (to express any possible/real value of the spacetime state).

But what if the CA cells do not really need to store spacetime state information?
I have read that the equations of Relativity are similar to the equations of thermodynamics, which are known to "emerge from the more fundamental field of statistical mechanics".
Yes, it seems spacetime can still bend even when it contains no real quantum particles, but doesn't it always contain virtual particles?
(According to QM, virtual particle pairs, where one particle always has positive and the other negative energy/mass, pop in and out of existence for extremely short durations, everywhere.)
(I think those pairs of virtual particles must be going out of existence by colliding back, so their energies cancel out.)
Realize that what determines the bending state of spacetime anywhere is the existence of real quantum particles there.
If there are lots of real quantum particles with positive energy/mass, then the spacetime has positive curvature there.
And if there were lots of real quantum particles with negative energy/mass, then the spacetime would have negative curvature there.
What if the total curvature state of any spacetime volume is completely determined by the balance (and density) of positive and negative quantum particles there?
(Meaning, if the spacetime curvature is positive somewhere, then if we calculated the total positive and negative energy from all real and virtual particles there, we would find the positive energy is higher, accordingly. And vice versa: if the spacetime curvature is negative somewhere, the total negative energy is higher, accordingly.)
What would this mean where there is a gravitational field but no real (positive energy) particles?
I think it would mean the number of positive energy virtual particles must be higher than the number of negative energy virtual particles there, at any given time.
The consequence of this for the CA cells would be that they would only need to store (positive/negative) quantum particle state information; no spacetime state information.

And if we could really determine exactly how many physical qubits each of the CA cells would (at least) need,
then we could research the physical arrangement possibilities for the internal physical structure of the CA cells.

A reader may have noticed that a big assumption behind some of the above ideas is physical realism.
That is because I think, if we don't require physical realism (plausibility), then how can we hope to make any progress on solving the problem of reality, if reality is not physically realist itself? :-)

I think one prediction of this TRCAQCAPS idea is that Black Holes must be made of Planck particles.
(Imagine the size (Compton Wavelength) of any quantum particle keeps getting smaller with increasing gravity until finally its Compton Wavelength becomes equal to its Schwarzschild radius.)
I think Hawking Radiation implies BHs have at least a surface entropy, indicating discrete information units/particles in units of Planck area.
I think that could be how a BH would look to observers around it, and the actual total entropy of a BH could be the Event Horizon volume divided by the Planck (particle/unit?) volume.

I think if spacetime is discrete at Planck scale, maybe the Holometer experiment could help prove it someday.

Could a Gravitational Wave detector in space someday find evidence of GW discretization (and therefore of spacetime discretization)?

I recently read news (some links I found are referenced below) about a new kind of atomic clock that uses multiple atoms together to get a (linearly/exponentially, based on the number of atoms?) more stable time frequency.
I am guessing (I did not fully read all the news about it) it must be done by forcing the atoms (oscillators) into synchronization somehow.
Which brings the question: what is the limit for measuring time durations, in terms of resolution?
Will atomic clocks someday finally reach the Planck Time measurement scale (and directly show time is discrete in Planck Time units)?

(On a side note, could we create a chip that contains a 2D/3D grid of analog/digital oscillator circuits, and force them into synchronization somehow, to reach atomic clock precision?)
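I do not know the actual mechanism used in those atomic clock experiments, but a standard toy model of oscillators being forced into synchronization is the Kuramoto model; here is a minimal sketch of it:

import random, math

# Toy model of forced synchronization (Kuramoto model) -- only an illustration
# of how coupled oscillators can lock to a common frequency; not the actual
# mechanism of the atomic clock experiments referenced above.

N, K, dt, steps = 100, 2.0, 0.01, 5000
omega = [random.gauss(1.0, 0.1) for _ in range(N)]        # natural frequencies
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]

for _ in range(steps):
    mean_sin = sum(math.sin(t) for t in theta) / N
    mean_cos = sum(math.cos(t) for t in theta) / N
    # mean-field coupling: (1/N) * sum_j sin(theta_j - theta_i)
    theta = [t + dt * (w + K * (mean_sin * math.cos(t) - mean_cos * math.sin(t)))
             for t, w in zip(theta, omega)]

# Order parameter r: 0 = incoherent, 1 = fully synchronized.
r = math.hypot(sum(math.cos(t) for t in theta) / N,
               sum(math.sin(t) for t in theta) / N)
print(f"order parameter r = {r:.3f}")  # close to 1 for K above the critical coupling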

My sincere hope is that the ideas presented above could someday lead to testable/observable predictions for finding out the true nature of our universe/reality.

https://en.wikipedia.org/wiki/Theory_of_relativity
https://en.wikipedia.org/wiki/Quantum_mechanics
https://en.wikipedia.org/wiki/Cellular_automaton
https://en.wikipedia.org/wiki/Von_Neumann_neighborhood
https://en.wikipedia.org/wiki/Tetrahedron
https://en.wikipedia.org/wiki/Quantum_computing
https://en.wikipedia.org/wiki/Planck_particle
https://en.wikipedia.org/wiki/Holometer
https://en.wikipedia.org/wiki/Atomic_clock
https://www.livescience.com/60612-most-precise-clock-powered-by-strontium-atoms.html
https://www.engadget.com/2017/10/06/researchers-increased-atomic-clock-precision/?sr_source=Twitter
https://www.digitaltrends.com/cool-tech/worlds-most-precise-atomic-clock/

Emergent Property Problem

Emergent properties are everywhere in physics.
Some of the biggest ones:
Chemistry is the emergent property of Quantum Mechanics.
Biology is the emergent property of Chemistry.
Psychology is the emergent property of Biology.
Sociology is the emergent property of Psychology.

I think Quantum Mechanics (and Relativity) is also an emergent property of a Cellular Automaton Quantum Computer (CAQC) operating at Planck scale. If so, how can we find out its operation rules?

How about we try to understand the general mathematical problem first?

The problem is this:
We are given the high level (macro scale) rules of an emergent property and asked, what are the low level (micro scale) rules which created those high level rules?
(Also the reverse of this problem is another big problem.)

Could we figure out rules of Quantum Mechanics, only from rules of Chemistry (and vice versa)?

When we try to solve a complex problem, obviously we should try to start with a simpler version of it, whenever possible.

There are many methods for Computational Fluid Dynamics (CFD) simulations. If we were given 2D fluid simulation videos of a certain resolution and duration for each different method, could we analyze those videos using computer software to find out which video was produced by which method? At what resolution and duration does the problem become solvable/unsolvable for certain? Moreover, at what resolution and duration can or can't we figure out the specific rules of each method?

How about an even simpler version of the problem:
What if we used a two-dimensional cellular automaton (2D CA)?
Imagine we run any 2D CA algorithm using X*Y cells for N time steps to create a grayscale video.
Also imagine each grayscale pixel in the video is calculated as the sum or average of M by M cells, like a tile.
At what video resolution and duration can or can't we figure out the full rule set of the 2D CA algorithm?

How about an even simpler version of the problem:
What if we used a one-dimensional cellular automaton (1D CA)?
Imagine we run any 1D CA algorithm using X cells for N time steps to create a grayscale video.
Also imagine each grayscale pixel in the video is calculated as the sum or average of M cells, like a tile.
At what video resolution and duration can or can't we figure out the full rule set of the 1D CA algorithm?
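Here is a minimal sketch of generating such coarse-grained data for the 1D case, using Rule 30 as an example CA and tiles of M cells (the choices of rule, X, N, and M are arbitrary, for illustration only):

# Minimal sketch of the 1D version of the problem: run an elementary CA
# (Rule 30 here as an example) and coarse-grain each row into "grayscale
# pixels" by averaging tiles of M cells -- the data a solver would be given.

RULE, X, N, M = 30, 120, 60, 8
rule = [(RULE >> i) & 1 for i in range(8)]       # rule lookup table
row = [0] * X
row[X // 2] = 1                                  # single seed cell

frames = []
for _ in range(N):
    # one grayscale "pixel" per tile of M cells (average cell value)
    frames.append([sum(row[i:i + M]) / M for i in range(0, X, M)])
    row = [rule[(row[i - 1] << 2) | (row[i] << 1) | row[(i + 1) % X]]
           for i in range(X)]                    # periodic boundary

for f in frames[:5]:                             # first few coarse-grained rows
    print(" ".join(f"{p:.2f}" for p in f))

The inverse problem would then be: given only the frames (for some resolution X/M and duration N), recover RULE.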

(And the reverse problem is this:
Assume the grayscale video described above for a 1D/2D CA shows the operation of another CA (which is the emergent property).
Given the rule set of any 1D/2D CA, predict the rule set of its emergent-property CA for any given tile size.)

Also what if the problem for either direction has a constraint?
For example, what if we already know the unknown 1D/2D CA we are trying to figure out is a Reversible CA?

https://en.wikipedia.org/wiki/Cellular_automaton
https://en.wikipedia.org/wiki/Elementary_cellular_automaton
https://en.wikipedia.org/wiki/Reversible_cellular_automaton

20170914

POWER OF QUANTUM COMPUTERS

It seems clear that when it comes to solving numerical search problems like Integer Factorization, quantum computers would allow us to find the solution(s) almost instantly.
We just set up the problem (multiply two unknown integers to get an unknown integer result, then set the unknown result to the result we want) and the input integers become known.
So quantum computers would be vastly more powerful than regular computers for solving numerical search problems.

But we also use regular computers for symbolic calculation
(CAS (Computer Algebra System) software like Mathematica, Maple, etc.). What more could quantum computers provide when it comes to symbolic calculation?

I think they could provide the same benefit as for numerical calculation, meaning instantly solving symbolic search problems.
Imagine if we could just set up an equation expression string as input, and the quantum computer sets the output string (with unknown value) to a general solution expression (known value), if such a solution really exists/is possible.
For example:
1)
Input string: "a*x^0+b*x^1=0"
String value search problem: "x=?"
Output string: "-a/b"
2)
Input string: "a*x^0+b*x^1+c*x^2=0"
String value search problem: "x=?"
Output string: "(-b+(b^2-4*a*c)^(1/2))/(2*c)"
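For comparison, here is what a classical CAS can already do for these two examples (a minimal sketch using SymPy; note that with the input ordered as a + b*x + c*x^2, the quadratic solutions have 2*c, not 2*a, in the denominator):

# For comparison: a classical CAS (SymPy) solving the two example problems.
from sympy import symbols, solve

a, b, c, x = symbols("a b c x")

print(solve(a + b*x, x))            # [-a/b]
print(solve(a + b*x + c*x**2, x))   # [(-b ± sqrt(b**2 - 4*a*c))/(2*c)]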

I think using quantum computers for symbolic calculation should allow us to solve many important problems of this kind which we cannot solve with regular computers in practical time.
I am guessing those would even include some Millennium Prize Problems, like finding (all) general solution expressions for the Navier-Stokes equations (and proving the Riemann Hypothesis?).

I think, assuming we will have a suitable general purpose quantum computer someday, the only issue is figuring out exactly how to express and solve symbolic calculation problems like the two examples above.

Let's try to solve the first problem using a quantum computer:
Assuming the quantum computer symbolically calculated the solution (expression string E), how could we test whether it is correct?
How about creating an equation that would be true only if E is a valid solution, which is the input equation itself:
"a*E^0+b*E^1=0" or "a+b*E=0"
Then I think the solution algorithm for the quantum computer would be:
Start with unknown values E, a, b.
Calculate a+b*E (not a numerical calculation but a symbolic expression calculation, using an expression tree).
Set the unknown calculation result to 0.
The unknown string E collapses to the answer: "-a/b"

And if we consider how we could do the symbolic calculation step above using a regular computer, which requires manipulating an expression tree using stack(s), then we need to figure out how to create a quantum stack using a quantum computer.
(Imagine a stack that can do any number of push/pop operations instantly, to collapse into its final known state instantly.)
(If we could do quantum stacks, then we also could do quantum queues.)
(And then quantum versions of other standard programming data structures would also be possible.)

What could be the most practical way to build a large scale quantum computer?

I think currently building a quantum computer is really hard because our physical world is highly noisy at quantum scale.
Imagine using single atoms/molecules as qubits.
Imagine cooling them close to absolute zero in a vacuum environment that needs to be perfectly maintained.

Could there be a better way?

What if we create a quantum computer in a different level of reality, which does not have noise?

Think about our regular digital computers.
Could we think of the bit values in the memory of a working regular computer as a different level of reality of quasiparticles, one which does not have noise?

Can we create an extrinsic-semiconductor-based quantum computer chip that creates and processes qubits as quasiparticles?
(With the quantum computer designed and operated like a Cellular Automaton, similar to Wireworld?)

https://en.wikipedia.org/wiki/Quasiparticle
https://en.wikipedia.org/wiki/Electron_hole
https://en.wikipedia.org/wiki/Extrinsic_semiconductor
https://en.wikipedia.org/wiki/Cellular_automaton
https://en.wikipedia.org/wiki/Wireworld

Continuum Hypothesis is False

Continuum hypothesis states "There is no set whose cardinality is strictly between that of the integers and the real numbers".

Resolution:
Express each set in question as a set of points in (ND) Euclidean space,
and calculate their fractal dimensions to compare their cardinalities =>

Set of all integers => Fractal Dimension=0
Set of all real numbers => Fractal Dimension=1
Set of all complex numbers => Fractal Dimension=2
Set of all quaternion numbers => Fractal Dimension=4
Set of all octonion numbers => Fractal Dimension=8
Set of all sedenion numbers => Fractal Dimension=16
Set of all points of a certain fractal => Fractal Dimension:
Cantor set: 0.6309
Koch curve: 1.2619
Sierpinski triangle: 1.5849
Sierpinski carpet: 1.8928
Pentaflake: 1.8617
Hexaflake: 1.7712
Hilbert curve: 2
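Most of the fractal dimensions listed above can be reproduced from the similarity-dimension formula D = log(N)/log(s), where the fractal is made of N copies of itself, each scaled down by a factor s (the Hilbert curve, being space-filling, simply has dimension 2):

import math

# The fractal dimensions listed above follow from the similarity dimension
# D = log(N) / log(s): N self-similar copies, each scaled down by factor s.
fractals = {
    "Cantor set":          (2, 3),
    "Koch curve":          (4, 3),
    "Sierpinski triangle": (3, 2),
    "Sierpinski carpet":   (8, 3),
    "Pentaflake":          (6, 1 + (1 + 5**0.5) / 2),  # scale = 1 + golden ratio
    "Hexaflake":           (7, 3),
}

for name, (n, s) in fractals.items():
    print(f"{name:20s} D = {math.log(n) / math.log(s):.4f}")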

20170905

EXPLAINING DARK ENERGY AND DARK MATTER

If Universe/Reality (U/R) is a Cellular Automaton (CA) (Quantum Computer (QC)) operating at Planck Scale (PS), then how could it explain Dark Energy (DE) and Dark Matter (DM)?

Assume Quantum Physics (QP) is its first Macro Scale (MS) Emergent Property (EP), and assume Relativity Physics (RP) is its second MS EP;
then Dark (Energy & Matter) Physics (DP) could be its third MS EP!
(Just like, for example, Newton (Navier-Stokes) Physics (NP) is the first Macro Scale (MS) Emergent Property (EP) of some CA, like FHP and LBM.)

Is the ratio of DM to Matter (DM/M) always (everywhere and everywhen) constant in the Universe?
Is the ratio of DE to Vacuum Energy (DE/VE) always (everywhere and everywhen) constant in the Universe?
(If so, could that be a consequence of DP being what is said above?)

Does every EP have a finite scale range?
(Do fluid simulation CA (like FHP/LBM) have a second layer of EP at super-macro scale (where NP no longer applies)?)

20170816

NATURE OF TIME

The concept of “now” being relative implies an unchanging 4D “Block Universe” (so the future is predictable), and it comes from Relativity.
But QM says the opposite (the future is unpredictable; there is only a certain probability for any future event).

As we look at the Universe/reality starting at microscale (particle size) and go to macroscale, future events become more and more certain.
For example, think of how certain the things you plan to do tomorrow are: can't we say they are not perfectly certain, but close?
But also think of how certain the motion of Earth in its orbit is tomorrow. Isn't it much more certain (but still not perfectly certain)?

The future being unpredictable at microscale and becoming more and more predictable at higher and higher scales also happens in Cellular Automata (of the kind used for fluid simulation).

I think one clear implication of the future becoming more and more predictable at higher and higher scales is that time must be an emergent property.
Which in turn implies spacetime must be an emergent property.
Which in turn implies Relativity must be an emergent property.

I think I read somewhere that the equations of GR are similar to the equations of some kind of (non-viscous?) fluid.
If so, it would make sense, considering that Cellular Automata used for fluid simulation show similar behavior to GR.

I just came across part of an article from Scientific American, September 2015, that says something very similar to what I said about the nature of time:

“Whenever people talk about a dichotomy, though, they usually aim to expose it as false. Indeed, many philosophers think it is meaningless to say whether the universe is deterministic or indeterministic. It can be either, depending on how big or complex your object of study is: particles, atoms, molecules, cells, organisms, minds, communities. “The distinction between determinism and indeterminism is a level-specific distinction,” says Christian List, a philosopher at the London School of Economics and Political Science. “If you have determinism at one particular level, it is fully compatible with indeterminism, both at higher levels and at lower levels.” The atoms in our brain can behave in a completely deterministic way while still giving us freedom of action because atoms and agency operate on different levels. Likewise, Einstein sought a deterministic subquantum level without denying that the quantum level was probabilistic.”

(All my comments above were also published here:
http://scienceblogs.com/startswithabang/2017/08/13/comments-of-the-week-172-from-sodium-and-water-to-the-most-dangerous-comet-of-all/)

If the future (time) becomes more and more certain as we go from microscale to macroscale, here is a thought experiment for determining how exactly that happens:
Imagine that in a vacuum chamber we dropped a single neutral Carbon atom from a certain height many times, and measured how close it hits the center of a circular target area, with what probability. Later we repeated the experiment with C60 molecules. Later with solid balls of 60 C60 molecules. Later with solid balls of 3600 C60 molecules. ...
I think what would happen is that bigger and bigger solid balls would hit closer and closer to the center with higher and higher probabilities. And the general graph (an exponential curve?) of the results would tell us how exactly the future (time) becomes more and more certain.

A more advanced version of the thought experiment could be this:
Imagine we started the experiment with micro balls and with a very small drop height. And as the radius of the solid balls got bigger and bigger, we increased the drop height by the same ratio as the radius increase.
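As a toy model of the expected trend (not of the real physics): if each of the N constituent particles contributed an independent random transverse kick, the landing spread of the whole ball would shrink like 1/sqrt(N), by the central limit theorem:

import random, statistics

# Toy model of the drop experiment: if each of the N constituent particles
# contributes an independent random transverse kick, the landing spread of
# the whole ball scales like 1/sqrt(N) (central limit theorem). This is only
# a sketch of the expected trend, not a simulation of the real experiment.

def landing_offset(n_particles, kick=1.0):
    # net transverse offset = average of n independent random kicks
    return sum(random.gauss(0, kick) for _ in range(n_particles)) / n_particles

for n in (1, 60, 3600):                  # atom, C60-like cluster, bigger ball
    spread = statistics.stdev(landing_offset(n) for _ in range(500))
    print(f"N = {n:5d}  spread ~ {spread:.4f}")
# Spread shrinks ~1/sqrt(N): bigger balls hit closer to the center.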

20170807

FUTURE OF PHYSICS

If we look at the history of physics, is there a clear trend that allows us to guess its future?

What are the major milestones in physics history?
I think it could be said:
1) Ancient Greece (level) Physics
2) Galileo (level) Physics
3) Newton (level) Physics
4) Einstein (level) Physics
5) TOE (level) Physics(?)

I think there is indeed a clear trend if you think about it.
Each new revolution in physics brings something like an order of magnitude increase in the complexity of the math (calculations), not just a new theory.
So I would guess doing calculations to solve physics problems using a TOE will be practically impossible using pen and paper only.
I think it will require a (quantum) computer.
(Realize that all physics problems (where an answer is possible) can be solved today using non-quantum (super)computers/calculators/pen & paper.)

I think if the Universe (or Reality) turns out to be a Cellular Automaton running on an ND matrix qubit (register) quantum computer (with Planck scale cells),
then it would fit the above guess about the future of physics (TOE) perfectly.

20170731

Physics Of Star Trek

I have seen maybe all Star Trek TV show episodes and movies.
Below I will try to provide more plausible ways of realizing similar technologies according to the known laws of physics of our Universe.
I do not know if similar explanations have been provided by anyone before.

Super Energy Sources:
They could be portable fusion reactors which are almost perfectly efficient.
They could provide continuous power (similar to DC) or repeating pulses (similar to AC).
There may also be super batteries that store a dense cloud of electron gas in vacuum (or as a BEC?).

Stun guns:
Imagine a super powerful gun that momentarily creates conductive paths in air using pulsed/continuous UV lasers.
It sends a powerful electroshock to the target through those conductive paths.
(I think this tech is already in development currently.)

Teleportation:
Imagine two teleportation machines (chambers).
The sender machine creates some kind of quantum shock wave that instantly destroys the target object into gamma photons that carry the same quantum information.
That information is sent to the receiver machine, which has a giant BEC (made of the same kinds of atoms/molecules, in the same proportions, as the target object?).
When the information is applied to the BEC (instantly, like a quantum shock wave), it somehow instantly quantum mechanically collapses into an exact copy of the object.

Phasers:
Instantly destroys the target object using a quantum shock wave similar to the one used in teleportation.
(The target object instantly gets destroyed, as in teleportation, but there is no receiver for its quantum information.)

Artificial Gravity:
Imagine if we had small coils that could create high positive/negative spacetime curvatures around them (spherical/cylindrical).
We could place a grid of those coils under floors etc. to create artificial gravity.

Force Fields:
Imagine if we created spherical/cylindrical spaceships covered by a dense grid of (+/-) gravity coils,
and also a dense grid of (superconductor) coils that can create (+/-) electric/magnetic fields.
Wouldn't it be possible to use them to create "force fields" all around the spaceships to deflect any kind of attack (atoms/particles/photons)?

Cloaking Fields:
Imagine if we created spherical/cylindrical spaceships covered by a dense grid of (+/-) gravity coils.
Wouldn't it be possible to use them to create a photon deflection field all around the spaceships?

Warp Speed:
Imagine if we created spherical/cylindrical spaceships covered by a dense grid of (+/-) gravity coils.
Wouldn't it be possible to use them to create a warp bubble all around the spaceships, to act like an Alcubierre Drive?

Sub-space Communication:
(Since we assume we have the ability to manipulate the curvature of spacetime:)
Imagine we have tech to create micro wormholes as twins and are able to trap them indefinitely.
A communication signal enters either one and instantly comes out of the other.
Each time we create a new set of twin micro wormholes, we keep one in a central hub on Earth,
and the other is carried by a spaceship or placed on a different planet/moon/space station.
(The same tech could also be useful to create and trap micro Black Holes, which may be useful as compact batteries.)

Electronic Dampening Field:
Imagine an EMP created like a standing wave, using a grid of phased-array EMP generators.

Spaceship hulls that can withstand almost any kind of attack, at least for a while if necessary:
How about metallic hydrogen, or another solid material created using ultrapressure (and temperature)?

I think it is also clear that Star Trek physics requires devices with the ability to create strong positive and negative spacetime curvatures.
How could that work according to the laws and limitations of known physics, assuming they must always be obeyed?

According to General Relativity, spacetime bends in the presence of positive or negative mass/energy(/pressure/acceleration).

What if we annihilated a small amount of matter/antimatter at a spot (as pulses)?

(Could there be an economical way to create as much antimatter as we need? Think about how we can easily induce a permanent magnet to permanently switch its N and S sides by momentarily creating a strong enough reverse magnetic field using an electromagnet.
Could there be any way to create a special quantum field/shockwave (using an electric and/or magnetic field generator, or a laser?)
such that when it passes through a sample of matter (trapped in mid-vacuum), it induces that matter to instantly switch to antimatter (so that instantly all electrons switch to positrons, all protons to anti-protons, all neutrons to anti-neutrons)?)

What if we created an arbitrarily strong volume/spot of magnetic and/or electric field(s)?

What if we created a spot of ultrapressure using a tech way beyond any diamond anvil?

What if we created a spot of negative ultrapressure (by using a pulling force)?
(Imagine if we had or created a (solid?) material that is ultrastrong against pulling force (even for a moment).)

What if we had or created an ultrastrong (solid?) disk/sphere/ring and trapped it in mid-vacuum,
then created an ultrapowerful rotational force on it (even for a moment) using an ultrapowerful magnetic field,
so that the object gained (even for a moment) an ultrahigh speed and/or positive/negative acceleration?

20170730

3D VOLUME SCANNER IDEA

I recently learned about an innovative method to get 3D scans of objects. It overcomes the line-of-sight problem and also captures the inner shape of the object. A robot arm dips the object into water in different orientations. Each time, how the water level changes over time is measured, and from these measurements the 3D object shape is calculated, like a CAT scan.

I think this method can be improved upon greatly as follows:

Imagine we put a tight metal wire ring around the object we want to scan, maybe using a separate machine.
It could be a bendable but rigid steel wire ring, maybe a bicycle wheel ring, or even a suitable kind of plastic.
The object could be in any orientation, held tight by the ring.

Imagine we have an aquarium tank filled with liquid mercury
(which, unlike water, would keep the object and the tank walls dry, so that measurements would be more precise).
(Also, mercury is conductive, which would make measurements easier using electronic sensor(s).)
(It could also be a cylindrical tank.)

Imagine that inside the tank we have a vertical bar, along which a horizontal bar can move up and down under electronic control.
Imagine that horizontal bar, at its middle (underside), has a hook/lock for the wire ring (around the object).
That hook/lock has an electronically controlled motor that can rotate the wire ring (and so the object) to any (vertical) angle.
(To prevent the ring/object from swinging like a pendulum when it is dipped into the liquid (fast) each time, we could add a second horizontal bar with adjustable height that has a hook/lock for the wire ring at its middle (upper side). So the ring would be held in place at its top and bottom points by the two horizontal bars.)

Now imagine that to take new measurements each time, we rotate the object by a small, equal angular amount (within 360 degrees).
Then we dip the object fully into the liquid (at constant speed) and pull it fully back out (at constant speed).
Every time we dip the object, we record the changes in the liquid level in the tank over time.
(While the object is fully dipped, we could rotate it again and then record the liquid level changes while we pull the object fully back out,
to get two sets of measurements per cycle instead of one.)
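Here is a minimal sketch of turning one dip's level-vs-depth record into an object cross-section profile, under the simplifying assumption that the object is small relative to the tank (all numbers below are hypothetical):

# Minimal sketch of turning one dip's measurements into an object profile.
# Simplifying assumption: the object cross-section is small relative to the
# tank, so submerged volume V(z) ~ A_tank * rise(z), and the object's
# cross-sectional area at depth z is dV/dz. All numbers are hypothetical.

A_TANK = 0.05          # tank cross-sectional area (m^2)
DZ = 0.005             # depth step between level samples (m)

# liquid level rise (m) recorded at each 5 mm of dip depth (made-up data)
rise = [0.0, 0.0008, 0.0020, 0.0036, 0.0052, 0.0064, 0.0070, 0.0072]

volume = [A_TANK * r for r in rise]                     # submerged volume V(z)
area = [(v2 - v1) / DZ for v1, v2 in zip(volume, volume[1:])]

for i, a in enumerate(area):                            # object profile A(z)
    print(f"depth {DZ * (i + 1) * 1000:4.0f} mm: cross-section {a * 1e4:.1f} cm^2")

Combining such profiles from many rotation angles is what would give the CAT-scan-like 3D reconstruction.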

Of course, mercury is highly toxic and reacts with some metals,
so it would be best to find a better liquid.
The liquid would need to be non-stick, to keep the scanned objects and tank walls dry. As low a viscosity and density as possible, a maximal temperature range with linear volume change based on temperature, and constant volume under common air pressures would be better. Being stable (not chemically active) and non-toxic is a must.
Also, electrical conductivity would be a plus.

References:
https://www.sciencedaily.com/releases/2017/07/170721131954.htm
http://www.fabbaloo.com/blog/2017/7/25/water-displacement-3d-scanning-will-this-work
https://3dprintingindustry.com/news/3d-scanning-objects-dipping-water-118886/