20171028

Production Of Quantum Particles

How may quantum particles be produced by our Universe?

Assume our reality is created by a CA QC operating at Planck Scale.
Assume it creates a Planck Scale Particle based fluid medium, just like an LBM (a CA) creates a 2d/3d fluid simulation.
Assume, when that fluid medium starts boiling, it creates bubbles (which are its quasiparticles).
And since the cells of the CA QC are qubit (register(s)) based, those bubbles/quasiparticles have quantum properties.

So assume our universal fluid medium creates bubbles/quasiparticles (quantum particles),
as (positive/negative energy) virtual/real single/paired particles/antiparticles, depending on local conditions.

Assume, our perception of spacetime is created by virtual particles of quantum vacuum.
Assume, gravitational field is polarization of spacetime.
Assume, positive spacetime curvature is actually quantum vacuum producing more positive energy virtual particles than negative.
Assume, negative spacetime curvature is actually quantum vacuum producing more negative energy virtual particles than positive.
(So Casimir Force is actually creating artificial gravity/anti-gravity!)
And if the (positive/negative) curvature is beyond a necessary threshold, then a real particle (pair) is produced instead of a virtual particle (pair).

So we can say:
Amplitude of spacetime curvature decides whether a virtual or a real particle (pair) will be produced.
Sign of spacetime curvature decides whether a positive or a negative energy/mass particle (pair) will be produced.
Polarization/Rotation/Spin of spacetime curvature (?) decides whether a particle and/or an anti-particle will be produced.

Matter And Dark Matter

Assume, at the beginning of The Big Bang, the Universe was a ball of positive energy in the middle of a medium of negative energy.
Later it started absorbing negative energy and so started expanding.
As its positive energy density dropped below a threshold, DM particles were created near-uniformly everywhere. As the Universe continued to expand, DM particles coalesced into filaments of the cosmic web.

The BB also created hydrogen and helium uniformly everywhere.
Later, DM filaments provided guidance for matter, stars and galaxies to form. But we must realize this view leads to the Baryon Asymmetry Problem!

What if, matter of our Universe was created through a different mechanism, one which is asymmetric?

If we look at our Universe, it looks like matter has coalesced in the central regions of DM filaments/clouds. What if matter did not coalesce there, but was created in those central regions of DM clouds?

What if, whenever and wherever DM cloud density goes above a certain threshold, particles of the Standard Model are created, without their anti-particles? (And then later the DM cloud density would drop below the threshold there, like a negative feedback mechanism. And if so, that would mean the total amount of DM in the Universe must be decreasing over time!)

And what if, DM particles are gravitons with extremely low mass/energy, and so with extremely large size (Compton Wavelength)?
That may be why we cannot detect them directly and why they cannot join with each other to create a BH etc. (There may be a rule for them similar to the Pauli Exclusion Principle?)

About Graviton from Wikipedia:

"The analysis of gravitational waves yielded a new upper bound on the mass of gravitons, if gravitons are massive at all. The graviton's Compton wavelength is at least 1.6×10^16 m, or about 1.6 light-years, corresponding to a graviton mass of no more than 7.7×10^-23 eV/c2.[17] This relation between wavelength and energy is calculated with the Planck-Einstein relation, the same formula which relates electromagnetic wavelength to photon energy."

https://en.wikipedia.org/wiki/Dark_energy
https://en.wikipedia.org/wiki/Dark_matter
https://en.wikipedia.org/wiki/Graviton
https://en.wikipedia.org/wiki/Pauli_exclusion_principle
https://en.wikipedia.org/wiki/Pair_production
https://en.wikipedia.org/wiki/Two-photon_physics
https://en.wikipedia.org/wiki/Baryon_asymmetry
https://en.wikipedia.org/wiki/Standard_Model

20171024

Spacetime Curvature And Speed Of Light

What if Gravity is the 5th, emergent dimension? (So mass/energy of a particle is its location (+ or -) along the gravity dimension.)
(The 2D surface of a sphere is bent in the 3rd dimension. 4D spacetime is bent in the 5th (Gravity) dimension whenever (+ or -) energy/mass is present.)

When a positive spacetime curvature is present, the speed of light must slow down when passing through that location. (Just like light slows down when it enters water from air and refracts.)
And if so, then how can the index of refraction and the local speed of light be calculated for any spacetime location?

Spacetime curvature (which we can calculate) determines deflection angle (which we can also calculate).
Using Snell's Law:
sin(t0)/sin(t1)=v0/v1=n1/n0
Also if c is the speed when there is no curvature.
And if we plug in the values we know/assume then this is what we have:
sin(t0)/sin(t1)=c/v1=n1/n0

We can calculate total bending (deflection) angle of light (in radians) in General Relativity:
deltaPhi=4*G*M/C^2/R (M:Mass in kg; R:Distance from center in meters; C:Speed of light in m/s; G:6.7E-11)

Assume incoming angle of light is 90 degrees (pi/2 radians) (for refraction index=1 because n=c/v and no spacetime curvature in the first medium):
=> 1/sin(t1)=c/v1=n1/1 =>
1/sin(deltaPhi)=c/v1=n1/1 =>
1/sin(4*G*M/c^2/r)=c/v1=n1/1 =>
n1=c/v1=1/sin(4*G*M/c^2/r) (Index of refraction for any spacetime location bending light)
(=> Possible extreme values:1/0=inf or -inf depending on direction of approach;1/1=1;1/-1=-1 => Range: -inf to +inf)
v1=c/n1=c*sin(4*G*M/c^2/r) (current speed of light for any spacetime location bending light)
(=> Possible extreme values: c*0=0; c*1=c; c*(-1)=-c => Range: -c to +c)
(Negative c would mean time is flowing backwards there!? c is the flow rate of time (event information flow (perception) rate) anywhere.)
(So it is not possible to make time move faster than c, but it can be slowed, and its direction may be changed using negative energy/mass.)
(Light slows down in a (positive) gravitational field because the field is denser from light's point of view. Imagine more positive energy/mass virtual particles on the way.

Gravitational field is actually local polarization of the virtual particle (each with + or - energy/mass) balance at any spacetime location.)
If the total net energy is negative then the curvature would be negative. Then the index of refraction would also be negative.

(The speed of light anywhere is the speed of information flow between the CA cells, which determines the perception of events in Relativity by any observer.)
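
(Here is a minimal Python sketch of the calculation above. Note that deflection_angle is the standard GR light-deflection formula, while refractive_index and local_light_speed implement the speculative relations proposed here, not standard optics or GR:)

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light in vacuum, m/s

def deflection_angle(mass_kg: float, r_m: float) -> float:
    """Standard GR total deflection angle (radians): deltaPhi = 4*G*M/c^2/r."""
    return 4 * G * mass_kg / (C ** 2 * r_m)

def refractive_index(mass_kg: float, r_m: float) -> float:
    """Speculative index from the text: n1 = 1/sin(deltaPhi)."""
    return 1.0 / math.sin(deflection_angle(mass_kg, r_m))

def local_light_speed(mass_kg: float, r_m: float) -> float:
    """Speculative local light speed from the text: v1 = c*sin(deltaPhi)."""
    return C * math.sin(deflection_angle(mass_kg, r_m))

# Example: light grazing the Sun (M ~ 1.989e30 kg, R ~ 6.963e8 m)
print(deflection_angle(1.989e30, 6.963e8))  # ~8.5e-6 rad (~1.75 arcsec)
print(refractive_index(1.989e30, 6.963e8))  # ~1.2e5, per the speculative formula
```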

https://en.wikipedia.org/wiki/Refractive_index
https://en.wikipedia.org/wiki/Snell%27s_law

20171022

Geometry of Our Universe 2

http://scienceblogs.com/startswithabang/2017/10/22/comments-of-the-week-final-edition/

Ethan wrote:
"From Frank on the curvature of the Universe: “What if Universe is surface of a 4d sphere where 3d surface (space) curved in the 4th dimension (time)?”"
"Well, there is curvature in the fourth dimension, but the laws of relativity tell you how the relationship between space and time occur. There’s no wiggle-room or free parameters in there. If you want the Universe to be the surface of a 4D sphere, you need an extra spatial dimension. There are many physics theories that consider exactly that scenario, and they are constrained but not ruled out."

Then what if I propose that the gravitational field across the Universe is the fifth dimension (for the Universe to be the surface of a 4D sphere)? (And also think about why it seems gravity is the only fundamental force that affects all dimensions. Couldn't it be because gravity itself is a dimension, so it must be included together with the other dimensions (of spacetime) in physics calculations?)

And why is it really important to know the general shape/geometry of the Universe?

I think then we could really answer whether the observable universe and the global universe are the same or not, and if they are the same, then we would also know that the Universe is finite in size. (And we could also calculate the general curvature of the Universe for any time, which would help cosmology greatly, no doubt.)

I am guessing the currently known variations in the CMB map of the Universe match the distribution of matter/energy in the observable Universe only in a general (non-precise) way. I think, if the Universe is really the 3d (space) surface of a 4D sphere, curved in the 4th dimension (time) (with gravity as the 5th dimension), then we could use the CMB map of the Universe as CT scan data and calculate the 3d/4d matter/energy distribution of the whole Universe from it. And then, if it matches (as a whole) the matter/energy distribution of our real observational Universe (which comes from other (non-CMB) observations/calculations), then we could know for sure whether our observational and global Universes are identical or not. (If not, then by looking at the partial match, maybe we could still deduce how large our global Universe really is.)

Further speculation:

Let's start with, spacetime is 4D (3 space dimensions and a time dimension).
Gravitational curvature at any spacetime point must be a 4D value => 4 more dimensions for the Universe.
If electric field at any spacetime point is a 4D value => 4 more dimensions for the Universe.
If magnetic field at any spacetime point is a 4D value => 4 more dimensions for the Universe.
Then the Universe would have 4+4+4+4=16 dimensions total!
(Then the dimensions of the Universe could be 4 quaternions = 2 octonions = 1 sedenion.)
(But if electric and magnetic fields require 3d + 3d, then the dimensions of the Universe would be 4+4+3+3=14 dimensions!)

20171028:
If our Universe has 16 dimensions and if our reality is created by a CA QC at Planck Scale, then its cell neighborhood may be like a tesseract or a double-cube (16 vertices). Or if our Universe has 14 dimensions and if our reality is created by a CA QC at Planck Scale, then its cell neighborhood may be like a Cube-Octahedron Compound or Cube 2-Compound (14 vertices).

(20171104) What if Kaluza–Klein Theory (which unites Relativity and Electromagnetism using a fifth dimension) is actually made correct by taking the gravitational field across the universe as the fifth (macro/micro) dimension? (Maybe compatibility with Relativity requires taking it as a macro dimension, and QM requires taking it as a micro dimension? (Which would be fine!?))

(20171115) According to Newtonian Physics, the speed of any object in the Universe is always:
|V|=(Vx^2+Vy^2+Vz^2)^(1/2) or V^2=Vx^2+Vy^2+Vz^2
But according to the Special Theory of Relativity, it really is:
C^2=Vx^2+Vy^2+Vz^2+Vt^2 which also means Vt^2=C^2-Vx^2-Vy^2-Vz^2 and so |Vt|=(C^2-Vx^2-Vy^2-Vz^2)^(1/2)
So, if gravitational field across the Universe is actually its 5th (macro) dimension then:
C^2=Vx^2+Vy^2+Vz^2+Vt^2+Vw^2 which also means Vw^2=C^2-Vx^2-Vy^2-Vz^2-Vt^2 and so |Vw|=(C^2-Vx^2-Vy^2-Vz^2-Vt^2)^(1/2)
(Is this the equation to calculate spacetime curvature from 4D velocity in General Relativity?)
(Equivalence Principle says gravity is equivalent to acceleration => Calculate its derivative?)
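
(A minimal Python sketch of the relation above; in standard SR terms |Vt| equals c/gamma, i.e. the rate of proper-time flow times c, which agrees with the equation as written:)

```python
import math

C = 2.998e8  # speed of light, m/s

def vt_magnitude(vx: float, vy: float, vz: float) -> float:
    """|Vt| from C^2 = Vx^2 + Vy^2 + Vz^2 + Vt^2 (equals c/gamma in SR)."""
    v2 = vx * vx + vy * vy + vz * vz
    return math.sqrt(C * C - v2)

print(vt_magnitude(0, 0, 0) / C)        # 1.0: at rest, all of c is "through time"
print(vt_magnitude(0.8 * C, 0, 0) / C)  # 0.6: at 0.8c, time flows at 60% rate
```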

20171021

Explaining Masses of Elementary Quantum Particles

How can we explain the masses of elementary quantum particles?

All elementary quantum particles have energy, some of it in the form of (rest) mass. Then the (rest) mass value of each particle is effectively binary: 0 or 1 (massless or massive).

Then what really needs to be explained is the energy distribution (order) of the list of elementary quantum particles.

We already know energy of each particle is quantized (discrete) in a Planck unit. (Then energy of each elementary particle is an integer.) And Compton Wavelength of each particle can be seen as its energy/size.

Then what needs to be explained is this:

Imagine we made a (sorted) bar chart of energies of elementary quantum particles. Then, is there a clear order of how energy changes from lowest to highest?

Or what if we made a similar sorted bar chart of particle Compton Wavelengths?

Or what if we made a similar sorted bar chart of particle Compton Frequencies?

Realize that the problem we are trying to solve is a kind of curve fitting problem.

Also realize we are really treating the data as a time series here.
But how do we really know if our data is a time series?

Also realize that, if we consider the case of sorted bar chart of particle Compton Frequencies, then what we really have is a frequency distribution (not a time series).

Wikipedia says: "The Fourier transform decomposes a function of time (a signal) into the frequencies that make it up"

Then what if, we apply Inverse Fourier Transform to the Compton frequency distribution of elementary quantum particles?

Would we not get a time series that we could use for curve fitting?

(Also, would it not be possible then that the curve we found could allow us to predict whether there are any smaller or larger elementary particles which we have not discovered yet?)
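
(Below is a minimal Python sketch of the suggested procedure: sort approximate Compton frequencies and apply an inverse FFT. The rest energies are rough, PDG-scale values used purely as illustrative inputs, and treating the sorted list as a spectrum is a literal-minded reading of the idea above, not an established method:)

```python
import numpy as np

# Approximate rest energies of elementary particles, in MeV (rough values).
rest_energy_mev = {
    "electron": 0.511, "muon": 105.7, "tau": 1776.9,
    "up": 2.2, "down": 4.7, "strange": 95.0, "charm": 1270.0,
    "bottom": 4180.0, "top": 172800.0,
    "W": 80379.0, "Z": 91188.0, "Higgs": 125250.0,
}

H_MEV_S = 4.135667696e-21  # Planck constant, MeV*s

# Compton frequency f = E/h, sorted ascending: the "sorted bar chart" above.
freqs = np.sort([e / H_MEV_S for e in rest_energy_mev.values()])

# Treat the sorted frequencies as a frequency distribution and apply an
# inverse FFT to get a series one could then attempt to curve-fit.
series = np.fft.ifft(freqs)
print(np.abs(series))
```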

https://en.wikipedia.org/wiki/Fourier_transform
https://en.wikipedia.org/wiki/Curve_fitting
https://en.wikipedia.org/wiki/Time_series

20171018

Geometry of Our Universe

The following are my comments recently published at:
http://scienceblogs.com/startswithabang/2017/10/14/ask-ethan-is-the-universe-finite-or-infinite-synopsis/

@Ethan:
“If space were positively curved, like we lived on the surface of a 4D sphere, distant light rays would converge.”
Think of surface of a 3d sphere first:
It is a 2d surface curved in the 3rd dimension.
Now think of surface of a 4d sphere:
It is a 3d surface curved in the 4th dimension.
What if Universe is surface of a 4d sphere where 3d surface (space) curved in the 4th dimension (time)?
So is it really not possible that the 3d space we see using our telescopes could be flat in those 3 dimensions of space, but curved in the time dimension?

First let me try to better explain what I mean exactly:
Let’s first simplify the problem:
Assume our universe was 2d, as the surface of a 3d sphere. Now latitude and longitude are our 2 space dimensions. Our distance from the center of the sphere is our time dimension.

Since our universe is the surface of a 3d sphere, it has a general uniform positive curvature at any time, depending on our time coordinate.

Now the big question is this:
As beings of 2 dimensions now, can we directly measure the global uniform curvature of our universe in any possible way? Or, asking the same question in another way: would our universe look curved or flat to us?

If speed of light was high enough, and if we had an astronomically powerful laser, we could send a beam in any direction, and later see it came back from exact opposite direction, sometime later.
Then we would know for certain our universe is finite.
But I claim, we still would not know what is the general curvature of our universe.

Could we really find/measure it by observing the stars or galaxies around, in our 2d universe?

For an answer, first realize we don't know any poles for our universe. We can use any point in our 2d universe as our North Pole; would it make any difference for coordinates/measurements/observations?
Then why not take our location in our 2d universe as the north pole of our universe.

Now try to imagine all longitude lines coming into our location (the north pole of our coordinate system) as the star/galaxy lights.
Can we really see/measure the general curvature of our universe from those light beams coming to us from every direction we can see?
I claim the answer is no.

Why? I claim that, as long as we are making all observations and experiments to calculate the general curvature using only our space dimensions (latitude and longitude),
we would always find it to be perfectly flat in those 2 dimensions. I also claim we could calculate the general curvature of our 2d universe (latitude and longitude) only if we include the precise time coordinates in the measurements/experiments, as well as the precise latitude and longitude coordinates.

So I really claim, our universe looks flat to us, because we are making all observations/measurements in 3 space dimensions. But if we also include time coordinates, then we can calculate true general curvature of our universe.

And I further claim:

Curvature of circle (1d curved line on 2d space):
1/r

Curvature of sphere (2d curved plane on 3d space):
1/r^2

Curvature of hypersphere (3d curved space in 4d space):
1/r^3

So if our universe was 2d space and 1 time (2d curved plane on 3d space):
Its general curvature at any time would be:
1/r^2=1/(c*t)^2 (where c is the speed of light and t time passed since The Big Bang in seconds)

And so if our universe is 3d space and 1 time (3d curved space on 4d space):
Then its general curvature at any time is:
1/r^3=1/(c*t)^3 (where c is the speed of light and t time passed since The Big Bang in seconds)

And I further claim:

If astrophysicists recalculated the general curvature of our universe, by including all space and time coordinate information correctly, then they should be able to verify that the calculation results always match the theoretical value, which is 1/(c*t)^3.

The raw data to use for those calculations would be pictures of the universe, for the same direction, looking at views there from different times.

I realized this value for the current general curvature of our universe (1/(c*t)^3) would be correct only if we ignore the expansion of the universe. To get correct values for any time, we need to use the radius of the universe at that time, including the effect of the expansion until that time.

Wikipedia says:
“it is currently unknown whether the observable universe is identical to the global universe”

From what I claimed above, I claim they are identical.

(So if the current radius of observational universe is 46 Bly, then I claim it means current global curvature of our universe is 1/(46 Bly in meters)^3.)
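
(A quick numeric evaluation of that claim in Python; the 1/r^3 definition of global curvature is the speculative claim made above, and 46 Gly is the commonly quoted radius of the observable universe:)

```python
LY_M = 9.4607e15            # one light-year in meters
r = 46e9 * LY_M             # ~4.35e26 m, radius of the observable universe

curvature = 1.0 / r ** 3    # the text's proposed global curvature, 1/r^3
print(curvature)            # ~1.2e-80 m^-3
```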

Dark Matter and Nature of Gravitational Fields And Spacetime

The following are my comments recently published at:
http://scienceblogs.com/startswithabang/2017/10/10/missing-matter-found-but-doesnt-dent-dark-matter-synopsis/

“Neutral atoms formed when the Universe was a mere 380,000 years old; after hundreds of millions of years, the hot, ultraviolet light from those early stars hits those intergalactic atoms. When it does, those photons get absorbed, kicking the electrons out of their atoms entirely, and creating an intergalactic plasma: the warm-hot intergalactic medium (WHIM).”
So the UV light from the earliest stars keeps the intergalactic gas hot (and does it perfectly for all gas atoms somehow).
But how is it possible that the UV light photons stayed the same after billions of years of expansion of the universe?
I have a really crazy idea on this WHIM which may be a better explanation though:
What if WHIM is no ordinary gas?
What if WHIM is an effect similar to Hawking Radiation?

What if spacetime is created by virtual particles as an emergent property? What if Gravitational Fields are polarization of spacetime? (Where positive curvature indicates probabilities of positive energy/mass virtual particles are higher in that region and negative curvature indicates probabilities of negative energy/mass virtual particles are higher in that region.)

In case of WHIM, imagine Dark Matter particles increase probabilities of positive energy/mass virtual particles and we observe it as hot gas.

Imagine any (+/-) unbalanced probabilities for virtual particles, on the path of light rays, act like different gas mediums that change the local refractive index, so the light rays bend.

And in the case of BHs, imagine the probabilities of positive energy/mass virtual particles increase so much nearby that some of those particles turn real, which we could observe as Hawking Radiation.

I just realized, if my ideas about the true nature of spacetime and gravitational fields (stated above) are correct, then it would mean the Casimir Force can actually be thought of as creating artificial gravity, like in Star Trek for example.

I am guessing if positive spacetime curvature slows down time then negative should speed it up. Then if the Casimir Force is creating spacetime curvature, and since we can make it negative in the lab, then we can make time move faster, and it may be measurable in the lab.

I wonder if we could use sheets of Graphene like Casimir Plates and stack them as countless layers to create a multiplied Casimir Force generator. Then we could also add a strong electric and/or magnetic field to amplify that force. Could a device like that create a human-weight-level artificial gravity field?

Imagine you made bricks of artificial gravity generators.
Imagine a spaceship (or space station) with a single floor of those bricks. Imagine the crew walks on the top and bottom of that single floor (upside-down to each other). So you have a kind of symmetric (up-down) 2-floor internal spaceship design.

Also, what if those bricks could also create artificial anti-gravity?
(Wikipedia says we can generate both attracting and repelling Casimir Force.) If that is possible, imagine each floor of the spaceship is 2 layers of bricks. The top layer generates gravity, the bottom layer generates anti-gravity. People on top feel a downward force of gravity, but people on the lower floor do not feel an upward force of gravity, because the anti-gravity layer (which they are closest to) cancels out total gravity to zero for them.

I wonder what would happen if we somehow created artificial gravity in front of a spaceship and artificial anti-gravity in the back? Could that cause the spaceship to move forward faster and faster, as if it kept falling into a gravity well?

If we can create artificial anti-gravity, I think it could be also useful as a shield in space, against space dust etc.

What if the Planck particle is the smallest and the Dark Matter particle is the biggest size/energy particle of the Universe?

Unpublished additional comments:

If we can create positive and negative artificial gravity (using the Casimir Force), and put them side by side to create movement, then what if we do it with the rotor of an electricity generator? (+- Casimir Force could be generated using multiple layers of Graphene sheets as Casimir Plates, and maybe amplified with a maximally strong permanent magnet.) And if that worked, would it mean creating free energy from spacetime itself (Zero-Point Energy)?

20171012

Equivalence Principle

Why are inertial and gravitational mass always equal?

Assume Newton's second law (F=m*a) is true.
Assume we used a weighing scale to measure the gravitational mass of an object on the surface of Earth. A weighing scale actually measures force. But since we know (free fall) acceleration is the same for all objects on the surface of Earth, we can calculate gravitational mass of the object as:
m=F/a

Now imagine a thought experiment:

What if gravity of Earth instantly switched to anti-gravity (but with same magnitude as before)?
Then the object would start accelerating away from Earth. What if we try to calculate the inertial mass of the object by measuring its acceleration? Realize the magnitude of that acceleration would still be the same for all objects, but with reversed sign, since the direction of acceleration is reversed. Then we have:
m=(-F)/(-a)=F/a

We assumed that magnitude of gravitational acceleration is the same for all objects. Because a=F/m and F=G*M*m/d^2 then a=G*M/d^2 for all objects on the surface of Earth (M: Earth mass; m: Object mass).

So Newton's second law, combined with Newton's Law of Gravity, leads to inertial and gravitational mass always being equal. Then to prove the Equivalence Principle, we would need to prove Newton's laws first.
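
(A minimal numeric sketch of the argument in Python; the constants are standard Earth values, and the point is that a = G*M/d^2 contains no dependence on the object's mass m:)

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # mass of Earth, kg
R_EARTH = 6.371e6   # radius of Earth, m

# Free-fall acceleration is independent of the object's mass m.
a = G * M_EARTH / R_EARTH ** 2
print(a)            # ~9.8 m/s^2

# Weigh a 70 kg object (the scale measures force F), then recover its mass:
m = 70.0
F = m * a
print(F / a)        # 70.0
print((-F) / (-a))  # 70.0: the anti-gravity thought experiment gives the same m
```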

Newton's Law of Gravity (F=G*M*m/d^2) works the same way as Coulomb's Law (F=k*Q*q/d^2), which describes the static electric force, which is a quantum force. Doesn't that mean Newton's Law of Gravity can be explained with Quantum Mechanics, or at least that it is compatible with QM?

Can Newton's second law be explained with QM?

https://en.wikipedia.org/wiki/Equivalence_principle
https://en.wikipedia.org/wiki/Mass#Inertial_vs._gravitational_mass
https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation
https://en.wikipedia.org/wiki/Coulomb's_law

20171008

The Quest For Ultimate Game Between Humans And Computers

I think Game Theory is one of the main branches of Computer Science.
A lot is known about the theoretical and practical complexity of common games like Chess, Go, Checkers, Backgammon, Poker and their many possible variants. Like how hard they are for classical (and quantum?) computers from a basic brute-force search viewpoint, from the viewpoints of various general smart search algorithms, or from the viewpoint of the best known customized searches.

In recent years there were multiple big game matches between human grand masters and classical computer software (sets of algorithms) running on various types of computers, with different processing speeds, numbers of processors, numbers of processing cores, memory sizes and speeds. The first I heard of was a then-world-champion human losing to a (classical) computer in Chess. Later I heard about a human grand master losing to a (classical) computer in Go.

One may think humans will eventually lose against classical computers in any given game, and that against quantum computers (which are much more powerful) humans will never have any chance.
But if we look closer at the current situation, I think it is still unclear.

Were those famous Chess and Go matches between human grand masters and classical computers really fair to both sides?
I think not. In both cases the software analyzed countless historical matches and became expert on every move of those matches.
Which human grand master has such knowledge/experience and would be able to recall any of it at any moment in a game they are playing? Can there be a fairer way?

What if Chess/Go software (intentionally) started the game matches with no knowledge of any past games other than its own (games it played against itself)? And also, isn't it obvious a human grand master would best recall the games he/she played himself/herself in the past? Wouldn't a Chess/Go match between a human grand master and a computer be much fairer with such a constraint on the computer side?

Can we make game matches between humans and (classical) computers even more fair?

I think humans lost at Chess first because the number of possible future moves does not increase (exponentially) fast enough, so a classical computer of today is able to handle it well.
In Go, however, the number of possible future moves does increase (exponentially) fast enough. The computer software used a deep learning ANN instead of relying on its ability to check so many possible future moves. So unlike in Chess, the computer did not have a powerful future foresight ability. But does this mean computers would eventually beat any human at any similar board game, using an ANN and/or future foresight ability?

I think it is possible the ANN approach worked successfully for Go because its rules are much simpler than those of Chess, for example. I don't think there is any evidence (at least not yet) that the ANN approach would always work for any board game. Also consider that the board size for Chess (8 by 8) is much smaller than for Go (19 by 19), which means the number of possible future moves increases much faster for Go, so a (classical) computer cannot handle it (see the rough estimate below).
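
(As a rough illustration of that point, here is a minimal Python sketch comparing game-tree sizes, using widely quoted ballpark figures for branching factor and game length; the numbers are order-of-magnitude estimates only:)

```python
import math

# Rough estimates: average branching factor b and typical game length d (plies).
games = {"Chess": (35, 80), "Go (19x19)": (250, 150)}

for name, (b, d) in games.items():
    # Game-tree size is roughly b^d; print its order of magnitude.
    print(f"{name}: ~10^{d * math.log10(b):.0f} positions")
# Chess: ~10^123, Go: ~10^360 -- brute-force foresight fails far earlier in Go.
```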

How about we try to combine the strength of Go (against future foresight) with the rule complexity of Chess? For example, there is a variant of Chess called Double Chess that is played on a 12 by 16 board. I think we could reasonably expect a game match between a human (grand) master and any classical computer (hardware + software) to be much fairer for both players than any past matches. I think so because the number of possible future moves should increase similarly to Go (if not even faster), given the closer board size and the usage of multiple kinds of game pieces (which are able to move in different ways). Also consider how many high quality past game examples would be available to learn/memorize for both sides, which I am guessing should not be many for Double Chess.

So if we used Double Chess for game matches between humans and computers, could we find out the ultimate winner for sure? And if the computer won again, would that really mean the end for the human side for sure?

Assuming we lost again, what if we created an even more complex (for both humans and computers) variant of Chess by using an even larger board? Like if we turned Double Chess into Double Double Chess?
And/or what if we added a few of the proposed new chess pieces to the game? Could we then really create a board game at which no classical computer (hardware + software) could ever beat a human master player?

Why is this important?
Because I think the question actually goes far beyond deciding the final outcome of a friendly and fair battle between the creators and their creations. What is the human brain, really? Is it an advanced classical computer, a quantum computer, or an unknown kind of computer? How do human grand masters of Chess/Go play the game compared to computers? Do humans rely only on past knowledge of game playing and as much future foresight as they can manage?
Or do humans have much more advanced algorithms running in their brains compared to computers? I think how a human player decides game moves is definitely similar to how an ANN algorithm does it, but it is still beyond that. Think about how we make decisions in our daily lives in our brains every moment. At any given time we have a vast number of possibilities to think about. Do we choose what to think about every moment randomly? If there are certain probabilities (which depend on individual past life experiences), how do we make choices between them every moment, again and again, fast? I think the most reasonable explanation would be if our brains are not classical, but quantum computers. (So neurons must be working like qubit registers.)

And if that is really true, it would mean no classical computer (hardware and software) could ever beat a human brain in a fair game.

(Also, if the human brain is a quantum computer, how about the rest of the human body? The possibilities would be Quantum Computer (QC), classical computer (Turing Machine (TM)), Pushdown Automaton (PDA), Finite State Machine (FSM). To decide, I think we could look at (functional) computer models of biological systems. Do they operate like a FSM, PDA, TM, QC? Do their algorithms have conditional branches and conditional loops, like a program for a TM? Or do they always use simple state transitions, like a FSM? I don't know much about how those modelling algorithms work; my guess is they are like a TM (which would mean the human body (except the brain) operates like a classical computer).)

https://en.wikipedia.org/wiki/Game_theory
https://en.wikipedia.org/wiki/Computer_chess
https://en.wikipedia.org/wiki/Computer_Go
https://en.wikipedia.org/wiki/List_of_chess_variants
https://en.wikipedia.org/wiki/Double_Chess
https://en.wikipedia.org/wiki/Automata_theory
https://en.wikipedia.org/wiki/Finite-state_machine
https://en.wikipedia.org/wiki/Pushdown_automaton
https://en.wikipedia.org/wiki/Turing_machine
https://en.wikipedia.org/wiki/Quantum_computing
https://en.wikipedia.org/wiki/Modelling_biological_systems

20171007

What If Reality Is A CA QC At Planck Scale?

Can we make any checkable predictions, if the idea in the title above is assumed true?

What our experiments and observations tell us is that at macro scale, where Relativity seems to rule, there is no indication of quantization of spacetime nor gravity.
But at micro scale, where Quantum Mechanics seems to rule, it seems all units are quantized (discrete) in terms of Planck units.
So Quantum Mechanics seems directly compatible, and I think Relativity is not directly compatible but indirectly compatible, if Relativity is assumed to be an emergent property.
(For example, simple CA used for fluid simulation are discrete at micro scale, but create a seemingly continuous world of classical fluid mechanics (Navier-Stokes Equations).)

If our reality is really created by a CA QC (always discrete, both structurally and in its cell state values) operating at Planck scale, then I would think:

Any time duration divided by Planck Time must be always an integer.

Any length divided by Planck Length must be always an integer.

Compton Wavelength of any quantum particle divided by Planck Length must be always an integer.

De Broglie Wavelength of any quantum particle divided by Planck Length must be always an integer.

If minimum possible particle energy (unit particle energy) is the energy of a photon that has wavelength equal to Planck Length,
then (Compton Wavelength of any quantum particle divided by Planck Length) must be how many units of particle energy that particle is made of.
(If so, then if there is any mathematical order in the masses of elementary particles, maybe it should be searched for after
converting their Compton Wavelengths to integers (by dividing each by the Planck Length)?)
(Also energy of a Planck particle (in a BH) must be max energy density possible in the universe?
(If so then energy of Planck particle (or its density?) divided by unit particle energy, is how many possible discrete energy levels (total number of states) per Planck cell?))
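
(A minimal numeric look at the first few predictions in Python, for the electron; the constants are standard, while the exact-integer claim itself is the speculation above, and floating point can of course only show the scale of the ratio, not confirm integerness:)

```python
H = 6.62607015e-34          # Planck constant, J*s
C = 2.99792458e8            # speed of light, m/s
L_PLANCK = 1.616255e-35     # Planck length, m
M_ELECTRON = 9.1093837e-31  # electron mass, kg

compton = H / (M_ELECTRON * C)   # electron Compton wavelength, ~2.43e-12 m
ratio = compton / L_PLANCK
print(ratio)                     # ~1.5e23; the claim is that this is an integer
```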

Also I think, since all quantum particles are known to be discrete in Planck units (which are known to be the smallest possible units of space, time, wavelength, frequency, amplitude, phase, energy, and possibly also mass), this implies (or is compatible with the idea that) all known (and maybe also unknown) quantum particles could actually be some kind of quasi-particles (which I think could be described as clusters of state information), created by The Reality CA QC At Planck Scale (TRCAQCAPS? :-).

At least my interpretation of it is that Stephen Wolfram, in a lecture, explained that the neighborhood of a (any) CA is related to its structural dimensions.
From that, and since we also know our universe/reality seems to be 3 space dimensions plus a time dimension at all scales, everywhere and everywhen,
we could conclude that the CA part of our reality should have 4 neighbors for each cell, in whatever physical arrangement is chosen among all the physical possibilities.
For example, if the Von Neumann neighborhood physical arrangement is chosen, it would imply we are talking about a 2D square lattice CA.
Or it could be that each center cell is connected to (physically touching) 4 neighbors located around it like the four vertex corners of a regular tetrahedron.
Are there any other physical cell arrangement possibilities I do not know of?

Also I think all physical conservation laws, like conservation of energy, imply the CA rules must always conserve the information (stored by the cells).

But what is the full range of possibilities for the internal physical structure/arrangement of the CA cells?

I think first we would need to determine what discrete set of state variables (each made of qubit registers) each CA cell needs to store.
I think if we want the CA to be able to create all quantum particles as quasiparticles, then each cell would need to store all basic internal quantum particle wave free variables as discrete qubit information units.
Assuming each cell is made of a physical arrangement of a total of N individual single-qubit storage subcells,
and from what we know about both the discrete wave and particle nature of quantum particles, I think it should be possible to determine how many qubits, at least, are needed for each free state variable.

But do we really know for certain that the CA cells would need to store only quantum particle information?

Would they not also need to store discrete state information about local spacetime?
Because it definitely seems spacetime can bend even when it contains no quantum particles, like around any massive object.
Then the question is what spacetime/gravity state information all the CA cells would also need to store.
Since gravity is bending of spacetime (which would be flat without gravity), and the local bending state (and more) everywhere is described by Einstein Field Equations,
we must look into how many free variables those equations contain,
and how many qubits (at least) would be needed, (to express any possible/real value of spacetime state), to store each of those free variables.

But what if the CA cells do not really need to store spacetime state information?
I had read that equations of Relativity are similar to equations of thermodynamics, which are known to "emerge from the more fundamental field of statistical mechanics".
Yes, it seems spacetime can still bend even when it contains no real quantum particles, but doesn't it always contain virtual particles?
(According to QM, virtual particle pairs, where always one particle has positive and the other has negative energy/mass, pop in and out of existence for extremely short durations, everywhere.)
(I think those pairs of virtual particles must go out of existence by colliding back together, their energies canceling out.)
Realize that what determines bending state of spacetime anywhere is the existence of real quantum particles there.
If there are lots of real quantum particles with positive energy/mass then the spacetime has positive curvature there.
And if there were lots of real quantum particles with negative energy/mass, then the spacetime would have negative curvature there.
What if total curvature state of any spacetime volume is completely determined by the balance (and density) of positive and negative quantum particles there?
(Meaning, if the spacetime curvature is positive somewhere then it means, if we calculated total positive and negative energy from all real and virtual particles there then we would find positive energy is higher, accordingly. And vice versa, if the spacetime curvature is negative somewhere then it means total negative energy is higher, accordingly.)
What would this mean where there is a gravitational field but no real (positive energy) particles?
I think it would mean, the number of positive energy virtual particles must be higher than the number of negative energy virtual particles there, any given time.
The consequence of this for the CA cells would be, they would only need to store (positive/negative) quantum particle state information; no spacetime state information.

And if we could really determine exactly how many physical qubits each of the CA cells (at least) would need,
then we could research physical arrangement possibilities for the internal physical structure of the CA cells.

A reader may have noticed that a big assumption for some of the above ideas is physical realism.
Because I think, if we don't require physical realism (plausibility), then how can we hope to make any progress on solving the problem of reality, if it is not physically realist itself? :-)

I think a prediction of this TRCAQCAPS idea is that Black Holes must be made of Planck particles.
(Imagine the size (Compton Wavelength) of any quantum particle keeps getting smaller with increasing gravity until finally its Compton Wavelength becomes equal to its Schwarzschild radius.)
I think Hawking Radiation implies BHs have at least a surface entropy, indicating discrete information units/particles in units of Planck area.
I think that could be how a BH would look to observers around it, and the actual total entropy of a BH could be the Event Horizon volume divided by the Planck (particle/unit?) volume.

I think if spacetime is discrete at Planck scale, maybe the Holometer experiment could be helpful to prove it someday.

Could a Gravitational Wave detector in space someday find evidence of GW discretization (and therefore of spacetime discretization)?

I recently read news (some links referenced below) about a new kind of atomic clock using multiple atoms together to get a (linearly/exponentially? (based on the number of atoms)) more stable time frequency.
I am guessing (I did not fully read all the news about it) it must be done by forcing the atoms (oscillators) into synchronization somehow.
Which brings the question: what is the limit for measuring time durations in terms of resolution?
Will atomic clocks someday finally reach the Planck Time measurement scale (and directly show time is discrete in Planck Time units)?

(On a side note, could we create a chip that contains a 2D/3D grid of analog/digital oscillator circuits, and force them into synchronization somehow to reach atomic-clock precision?)

My sincere hope is that the ideas presented above could someday lead to testable/observable predictions about the true nature of our universe/reality.

https://en.wikipedia.org/wiki/Theory_of_relativity
https://en.wikipedia.org/wiki/Quantum_mechanics
https://en.wikipedia.org/wiki/Cellular_automaton
https://en.wikipedia.org/wiki/Von_Neumann_neighborhood
https://en.wikipedia.org/wiki/Tetrahedron
https://en.wikipedia.org/wiki/Quantum_computing
https://en.wikipedia.org/wiki/Planck_particle
https://en.wikipedia.org/wiki/Holometer
https://en.wikipedia.org/wiki/Atomic_clock
https://www.livescience.com/60612-most-precise-clock-powered-by-strontium-atoms.html
https://www.engadget.com/2017/10/06/researchers-increased-atomic-clock-precision/?sr_source=Twitter
https://www.digitaltrends.com/cool-tech/worlds-most-precise-atomic-clock/

Emergent Property Problem

Emergent properties are everywhere in physics.
Some of the biggest ones:
Chemistry is the emergent property of Quantum Mechanics.
Biology is the emergent property of Chemistry.
Psychology is the emergent property of Biology.
Sociology is the emergent property of Psychology.

I think Quantum Mechanics (and Relativity) is also an emergent property of a Cellular Automaton Quantum Computer (CAQC) operating at Planck scale. If so, how can we find out its operation rules?

How about we try to understand the general mathematical problem first?

The problem is this:
We are given the high level (macro scale) rules of an emergent property and asked, what are the low level (micro scale) rules which created those high level rules?
(Also the reverse of this problem is another big problem.)

Could we figure out rules of Quantum Mechanics, only from rules of Chemistry (and vice versa)?

When we try to solve a complex problem, obviously we should try to start with a simpler version of it, whenever possible.

There are many methods for Computational Fluid Dynamics (CFD) simulations. If we were given 2D fluid simulation videos of a certain resolution and duration for each different method, could we analyze those videos using computer software to find out which video was produced by which method? At what resolution and duration does the problem become solvable/unsolvable for certain? Moreover, at what resolution and duration can we or can we not figure out the specific rules of each method?

How about an even simpler version of the problem:
What if we used two-dimensional cellular automaton (2D CA)?
Imagine we run any 2D CA algorithm using X*Y cells for N time steps to create a grayscale video.
Also imagine each grayscale pixel in the video is calculated as the sum or average of M by M cells, like a tile.
At what video resolution and what video duration can we or can we not figure out the full rule set of the 2D CA algorithm?

How about an even simpler version of the problem:
What if we used one-dimensional cellular automaton (1D CA)?
Imagine we run any 1D CA algorithm using X cells for N time steps to create a grayscale video.
Also imagine each grayscale pixel in the video is calculated as the sum or average of M cells, like a tile.
At what video resolution and what video duration can we or can we not figure out the full rule set of the 1D CA algorithm? (A minimal sketch of this setup appears below.)

(And the reverse problem is this:
Assume the grayscale video described above for 1D/2D CA, shows the operation of another CA (which is the emergent property).
Given the rule set of any 1D/2D CA, predict the rule set of its emergent property CA for any given tile size.)

Also what if the problem for either direction has a constraint?
For example, what if we already know the unknown 1D/2D CA we are trying to figure out is a Reversible CA?
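
Below is a minimal Python sketch of the 1D version of this setup: run an elementary CA (Rule 110 as an arbitrary example), then coarse-grain non-overlapping tiles of M cells into grayscale pixels. The questions above then ask whether the resulting "video" array still determines the rule (and whether knowing the CA is reversible helps).

```python
import numpy as np

def run_1d_ca(rule: int, width: int, steps: int, seed: int = 0) -> np.ndarray:
    """Run an elementary (1D, 2-state, radius-1) CA; return a (steps, width) history."""
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    state = np.random.default_rng(seed).integers(0, 2, size=width, dtype=np.uint8)
    history = np.empty((steps, width), dtype=np.uint8)
    for t in range(steps):
        history[t] = state
        left, right = np.roll(state, 1), np.roll(state, -1)
        state = table[4 * left + 2 * state + right]  # Wolfram rule lookup
    return history

def coarse_grain(history: np.ndarray, m: int) -> np.ndarray:
    """Average non-overlapping tiles of M cells into grayscale pixels in [0, 1]."""
    steps, width = history.shape
    width -= width % m  # drop cells that do not fill a whole tile
    return history[:, :width].reshape(steps, -1, m).mean(axis=2)

video = coarse_grain(run_1d_ca(rule=110, width=400, steps=200), m=8)
# 'video' (200 x 50 grayscale) is the only observable: can it reveal rule 110?
print(video.shape)
```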

https://en.wikipedia.org/wiki/Cellular_automaton
https://en.wikipedia.org/wiki/Elementary_cellular_automaton
https://en.wikipedia.org/wiki/Reversible_cellular_automaton