How can we explain the masses of elementary quantum particles?

All elementary quantum particles have energy, some of it in the form of (rest) mass. So the (rest) mass of each particle is, in effect, just a 0-or-1 property: either it has rest mass or it does not.

Then what really needs to be explained is the energy distribution (ordering) of the list of elementary quantum particles.

We already know the energy of each particle is quantized (discrete) in Planck units. (So the energy of each elementary particle is an integer in those units.) And the Compton wavelength of each particle can be seen as a measure of its energy/size.

Then what needs to be explained is this:

Imagine we made a (sorted) bar chart of the energies of the elementary quantum particles. Is there a clear pattern in how the energy changes from lowest to highest?

Or what if we made a similar sorted bar chart of the particles' Compton wavelengths?

Or what if we made a similar sorted bar chart of the particles' Compton frequencies?

Realize that the problem we are trying to solve is a kind of curve-fitting problem.

Also realize that we are really treating the data as a time series here.

But how do we really know whether our data is a time series?

Also realize that, in the case of the sorted bar chart of Compton frequencies, what we really have is a frequency distribution (not a time series).

Wikipedia says: "The Fourier transform decomposes a function of time (a signal) into the frequencies that make it up"

Then what if we apply the Inverse Fourier Transform to the Compton frequency distribution of the elementary quantum particles?

Would we not get a time series that we could use for curve fitting?

(And would it not then be possible that the curve we found could let us predict whether there are any smaller or larger elementary particles that we have not discovered yet?)
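As a concrete sketch of this proposal, here is what the sorted "bar chart", the inverse transform, and a simple curve fit might look like in code. Assumptions: approximate PDG rest masses for the nine charged fermions; treating the sorted list as a signal, and fitting log-mass vs. rank, are illustrative choices of mine, not established methods.

```python
import numpy as np

masses_mev = {  # approximate rest masses, MeV/c^2 (assumed rounded values)
    "electron": 0.511, "muon": 105.7, "tau": 1776.9,
    "up": 2.2, "down": 4.7, "strange": 95.0,
    "charm": 1275.0, "bottom": 4180.0, "top": 173000.0,
}

m = np.sort(np.array(list(masses_mev.values())))  # the sorted "bar chart"

# Compton frequency f = m*c^2 / h is proportional to the mass,
# so sorting masses also sorts Compton frequencies.
MEV_TO_HZ = 2.417989e20  # (1 MeV) / h, in Hz
freqs = m * MEV_TO_HZ

# Inverse FFT of the frequency distribution -> a complex "time series".
signal = np.fft.ifft(freqs)
print("ifft sample magnitude:", abs(signal[0]))

# Curve fitting on the sorted masses: the spectrum spans ~5 decades,
# so fit a straight line to log10(mass) vs. rank.
ranks = np.arange(len(m))
slope, intercept = np.polyfit(ranks, np.log10(m), 1)
print("log10(mass) ~ %.3f * rank + %.3f" % (slope, intercept))
```

Extrapolating that fitted line beyond the known ranks is one way to make the kind of prediction asked about above, though nothing guarantees the pattern continues.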

https://en.wikipedia.org/wiki/Fourier_transform

https://en.wikipedia.org/wiki/Curve_fitting

https://en.wikipedia.org/wiki/Time_series

# FB36 Blog

## Saturday, October 21, 2017

## Wednesday, October 18, 2017

### Geometry of Our Universe

The following are my comments recently published at:

http://scienceblogs.com/startswithabang/2017/10/14/ask-ethan-is-the-universe-finite-or-infinite-synopsis/

@Ethan:

“If space were positively curved, like we lived on the surface of a 4D sphere, distant light rays would converge.”

Think of the surface of a 3d sphere first:

It is a 2d surface curved in the 3rd dimension.

Now think of the surface of a 4d sphere:

It is a 3d surface curved in the 4th dimension.

What if the Universe is the surface of a 4d sphere: a 3d surface (space) curved in the 4th dimension (time)?

So is it really not possible that the 3d space we see using our telescopes could be flat in the 3 dimensions of space, but curved in the time dimension?

First, let me try to explain more precisely what I mean.

Let's simplify the problem:

Assume our universe were 2d, the surface of a 3d sphere. Latitude and longitude are then our 2 space dimensions, and our distance from the center of the sphere is our time dimension.

Since our universe is the surface of a 3d sphere, it has a uniform positive global curvature at any moment, depending on our time coordinate.

Now the big question is this:

As 2-dimensional beings, could we directly measure the global uniform curvature of our universe in any way? Or, asking the same question another way: would our universe look curved or flat to us?

If the speed of light were high enough, and if we had an astronomically powerful laser, we could send a beam in any direction and, some time later, see it come back from the exact opposite direction.

Then we would know for certain that our universe is finite.

But I claim we still would not know the general curvature of our universe.

Could we really find/measure it by observing the stars or galaxies around us in our 2d universe?

To answer this, first realize that our universe has no distinguished poles. We could use any point in our 2d universe as our North Pole; would it make any difference for coordinates/measurements/observations?

So why not take our own location in our 2d universe as the north pole of our coordinate system?

Now imagine all the longitude lines coming into our location (the north pole of our coordinate system) as star/galaxy light.

Can we really see/measure the general curvature of our universe from those light beams coming at us from every direction we can see?

I claim the answer is no.

Why? I claim that as long as we make all observations and experiments to calculate the general curvature using only our space dimensions (latitude and longitude), we will always find it to be perfectly flat in those 2 dimensions. I also claim that we could calculate the general curvature of our 2d universe only if we include precise time coordinates in the measurements/experiments, as well as precise latitude and longitude coordinates.

So I really claim our universe looks flat to us because we make all observations/measurements in the 3 space dimensions. But if we also include time coordinates, then we can calculate the true general curvature of our universe.

And I further claim:

Curvature of a circle (a 1d curved line in 2d space): 1/r

Curvature of a sphere (a 2d curved surface in 3d space): 1/r^2

Curvature of a hypersphere (a 3d curved space in 4d space): 1/r^3

So if our universe were 2d space plus 1 time dimension (a 2d curved surface in 3d space), its general curvature at any time would be:

1/r^2 = 1/(c*t)^2 (where c is the speed of light and t is the time passed since the Big Bang, in seconds)

And since our universe is 3d space plus 1 time dimension (a 3d curved space in 4d space), its general curvature at any time is:

1/r^3 = 1/(c*t)^3 (where c and t are as above)

And I further claim:

If astrophysicists recalculated the general curvature of our universe, including all space and time coordinate information correctly, they should be able to verify that the calculated results always match the theoretical value, which is 1/(c*t)^3.

The raw data for those calculations would be pictures of the universe in the same direction, showing the views there at different times.

I realize this value for the current general curvature of our universe (1/(c*t)^3) would be correct only if we ignore the expansion of the universe. To get correct values for any time, we need to use the radius of the universe at that time, including the effect of the expansion up to that time.

Wikipedia says:

“it is currently unknown whether the observable universe is identical to the global universe”

From what I claimed above, I claim they are identical.

(So if the current radius of the observable universe is 46 Bly, then I claim the current global curvature of our universe is 1/(46 Bly in meters)^3.)
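The two values can be evaluated numerically. A minimal sketch, assuming t ~ 13.8 billion years since the Big Bang and a 46 Bly radius for the observable universe (note that in standard differential geometry the sectional curvature of a 3-sphere is 1/r^2; the 1/r^3 form is this post's conjecture):

```python
C = 2.99792458e8            # speed of light, m/s
YEAR = 3.1557e7             # Julian year, s
LY = C * YEAR               # light year, m

t = 13.8e9 * YEAR           # assumed age of the universe, s
r_naive = C * t             # radius ignoring expansion, ~13.8 Bly
r_obs = 46e9 * LY           # observable-universe radius, ~46 Bly

# the post's conjectured curvature value, with and without expansion
print("1/(c*t)^3 =", 1.0 / r_naive**3, "m^-3")
print("1/r_obs^3 =", 1.0 / r_obs**3, "m^-3")
```

Both numbers are astronomically small, which is consistent with the claim that any global curvature would be hard to detect with purely spatial measurements.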

### Dark Matter and Nature of Gravitational Fields And Spacetime

The following are my comments recently published at:

http://scienceblogs.com/startswithabang/2017/10/10/missing-matter-found-but-doesnt-dent-dark-matter-synopsis/

“Neutral atoms formed when the Universe was a mere 380,000 years old; after hundreds of millions of years, the hot, ultraviolet light from those early stars hits those intergalactic atoms. When it does, those photons get absorbed, kicking the electrons out of their atoms entirely, and creating an intergalactic plasma: the warm-hot intergalactic medium (WHIM).”

So the UV light from the earliest stars keeps the intergalactic gas hot (and somehow does so uniformly for all gas atoms).

But how is it possible that those UV photons stayed the same after billions of years of expansion of the universe?

I have a really crazy idea about this WHIM, though, which may be a better explanation:

What if WHIM is no ordinary gas?

What if WHIM is an effect similar to Hawking Radiation?

What if spacetime is created by virtual particles as an emergent property? What if gravitational fields are a polarization of spacetime? (Where positive curvature indicates that the probabilities of positive energy/mass virtual particles are higher in that region, and negative curvature indicates that the probabilities of negative energy/mass virtual particles are higher in that region.)

In the case of the WHIM, imagine that Dark Matter particles increase the probabilities of positive energy/mass virtual particles, and we observe that as hot gas.

Imagine that any (+/-) unbalanced probabilities of virtual particles on the path of light rays act like different gas mediums that change the local refractive index, so the light rays bend.

And in the case of BHs, imagine that the probabilities of positive energy/mass virtual particles increase so much nearby that some of those particles turn real, which we could observe as Hawking Radiation.

I just realized that if my ideas about the true nature of spacetime and gravitational fields (stated above) are correct, it would mean the Casimir force can actually be thought of as creating artificial gravity, like in Star Trek for example.

I am guessing that if positive spacetime curvature slows down time, then negative curvature should speed it up. Then if the Casimir force creates spacetime curvature, and since we can make it negative in the lab, we could make time move faster, and that may be measurable in the lab.

I wonder if we could use sheets of graphene as Casimir plates and stack them in countless layers to create a multiplied Casimir force generator. We could then also add a strong electric and/or magnetic field to amplify that force. Could a device like that create an artificial gravity field strong enough to simulate human weight?
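For scale, the standard ideal-plate Casimir pressure formula P = pi^2 * hbar * c / (240 * d^4) can be evaluated directly. This is the textbook result for perfectly conducting parallel plates in vacuum; whether stacking layers or adding fields could amplify it, as speculated above, is not established.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure (Pa) between ideal parallel plates at separation d (m)."""
    return math.pi**2 * HBAR * C / (240.0 * d**4)

# the d^-4 dependence means the effect only matters at nanometer gaps
for d_nm in (10, 100, 1000):
    d = d_nm * 1e-9
    print("d = %5d nm -> P = %.3e Pa" % (d_nm, casimir_pressure(d)))
```

At a 10 nm gap the pressure is on the order of atmospheric pressure, but it falls off as the fourth power of the separation, which is why stacking macroscopic layers is the hard part of the idea.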

Imagine you made bricks of artificial gravity generators.

Imagine a spaceship (or spacestation) with a single floor of those bricks. Imagine the crew walks on top and bottom of that single floor (upside-down to each other). So you have a kind of symmetric (up-down) 2 floor internal spaceship design.

Also, what if those bricks could create artificial anti-gravity?

(Wikipedia says we can generate both an attracting and a repelling Casimir force.) If that is possible, imagine each floor of the spaceship is made of 2 layers of bricks. The top layer generates gravity; the bottom layer generates anti-gravity. People on top feel the downward force of gravity, but people on the lower floor do not feel an upward force, because the anti-gravity layer (which they are closest to) cancels the total gravity to zero for them.

I wonder what would happen if we somehow created artificial gravity in front of a spaceship and artificial anti-gravity in the back? Could that cause the spaceship to move forward faster and faster, as if it kept falling into a gravity well?

If we can create artificial anti-gravity, I think it could also be useful as a shield in space, against space dust etc.

What if the Planck particle is the smallest, and the Dark Matter particle the biggest, size/energy particle in the Universe?

Unpublished additional comments:

If we can create positive and negative artificial gravity (using the Casimir force) and put them side by side to create movement, then what if we do this with the rotor of an electricity generator? (The +/- Casimir force could be generated using multiple layers of graphene sheets as Casimir plates, and maybe amplified with a maximally strong permanent magnet.) And if that worked, would it mean creating free energy from spacetime itself (Zero-Point Energy)?

## Thursday, October 12, 2017

### Equivalence Principle

Why are inertial and gravitational mass always equal?

Assume Newton's second law (F=m*a) is true.

Assume we use a weighing scale to measure the gravitational mass of an object on the surface of the Earth. A weighing scale actually measures force. But since we know the (free fall) acceleration is the same for all objects on the surface of the Earth, we can calculate the gravitational mass of the object as:

m=F/a

Now imagine a thought experiment:

What if gravity of Earth instantly switched to anti-gravity (but with same magnitude as before)?

Then the object would start accelerating away from the Earth. What if we tried to calculate the inertial mass of the object by measuring its acceleration? Realize that the magnitude of that acceleration would still be the same for all objects, but with reversed sign, since the direction of the acceleration is reversed. Then we have:

m=(-F)/(-a)=F/a

We assumed that the magnitude of gravitational acceleration is the same for all objects. Because a=F/m and F=G*M*m/d^2, we get a=G*M/d^2 for all objects on the surface of the Earth (M: Earth's mass; m: the object's mass).
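The cancellation described above can be checked numerically; a minimal sketch using standard values of G and the Earth's mass and radius:

```python
G = 6.674e-11       # gravitational constant, N*m^2/kg^2
M_EARTH = 5.972e24  # mass of the Earth, kg
R_EARTH = 6.371e6   # mean radius of the Earth, m

def accel(m_object):
    """a = F/m with F = G*M*m/d^2; the object mass m cancels out."""
    F = G * M_EARTH * m_object / R_EARTH**2
    return F / m_object

# same acceleration for a 1 kg and a 1000 kg object, ~9.8 m/s^2
print(accel(1.0), accel(1000.0))
```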

So Newton's second law, combined with Newton's Law of Gravity, leads to inertial and gravitational mass always being equal. Then, to prove the Equivalence Principle, we would need to prove Newton's laws first.

Newton's Law of Gravity (F=G*M*m/d^2) has the same form as Coulomb's Law (F=k*Q*q/d^2), which describes the static electric force, a quantum force. Doesn't that mean Newton's Law of Gravity can be explained with Quantum Mechanics, or is at least compatible with QM?

Can Newton's second law also be explained with QM?

https://en.wikipedia.org/wiki/Equivalence_principle

https://en.wikipedia.org/wiki/Mass#Inertial_vs._gravitational_mass

https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation

https://en.wikipedia.org/wiki/Coulomb's_law

## Sunday, October 8, 2017

### The Quest For Ultimate Game Between Humans And Computers

I think Game Theory is one of the main branches of Computer Science.

A lot is known about the theoretical and practical complexity of common games like Chess, Go, Checkers, Backgammon, Poker and their many variants: how hard they are for classical (and quantum?) computers from a basic brute-force-search point of view, from the point of view of various general smart search algorithms, or from the point of view of the best known customized searches.

In recent years there have been several big matches between human grand masters and classical computer software (sets of algorithms) running on various types of computers with different processing speeds, numbers of processors and cores, and memory sizes and speeds. The first I heard of was a then-world-champion human losing to a (classical) computer at Chess. Later I heard about a human grand master losing to a (classical) computer at Go.

One might think humans will eventually lose to classical computers at any given game, and that against quantum computers (which are much more powerful) humans would never have any chance.

But if we look at the current situation more closely, I think it is still unclear.

Were those famous Chess and Go matches between human grand masters and classical computers really fair to both sides?

I think not. In both cases the software analyzed countless historical matches and became expert on every move of those matches.

Which human grand master has such knowledge/experience, and would be able to recall any of it at any moment in a game they are playing? Could there be a fairer way?

What if the Chess/Go software (intentionally) started the matches with no knowledge of any past games other than its own (games it played against itself)? And isn't it obvious that a human grand master would best recall the games he/she played in the past? Wouldn't a Chess/Go match between a human grand master and a computer be much fairer with such a constraint on the computer side?

Can we make matches between humans and (classical) computers even fairer?

I think humans lost at Chess first because the number of possible future moves does not increase (exponentially) fast enough, so a classical computer of today can handle it well.

In Go, however, the number of possible future moves does increase (exponentially) fast enough. The computer software used a deep-learning ANN instead of relying on its ability to check so many possible future moves. So unlike in Chess, the computer did not have a powerful foresight ability. But does this mean computers would eventually beat any human at any similar board game, using an ANN and/or foresight ability?

I think it is possible the ANN approach worked for Go because its rules are much simpler than those of Chess, for example. I don't think there is any evidence (at least not yet) that the ANN approach would always work for any board game. Also consider that the board for Chess (8 by 8) is much smaller than for Go (19 by 19), which means the number of possible future moves increases much faster for Go, so a (classical) computer cannot handle it by search alone.

How about combining the strength of Go (against foresight) with the rule complexity of Chess? For example, there is a variant of Chess called Double Chess that is played on a 12 by 16 board. I think we could reasonably expect a match between a human (grand) master and any classical computer (hardware + software) to be much fairer to both players than any past matches, because the number of possible future moves should increase similarly to Go (if not even faster), given the comparable board size and the use of multiple kinds of pieces (which move in different ways). Also consider how many high-quality past games would be available for both sides to learn from or memorize, which I am guessing would not be many for Double Chess.
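To get a feel for these sizes, here is a rough comparison using the commonly cited branching-factor and game-length estimates for Chess and Go; the Double Chess numbers are my own hypothetical guesses for illustration, not established values.

```python
import math

# (b = average branching factor, d = typical game length in plies)
games = {
    "Chess":        (35, 80),    # commonly cited estimates
    "Go (19x19)":   (250, 150),  # commonly cited estimates
    "Double Chess": (70, 120),   # hypothetical guess for illustration
}

# game-tree size ~ b^d; report log10 so the numbers stay readable
exponents = {name: round(d * math.log10(b)) for name, (b, d) in games.items()}

for name, e in exponents.items():
    print("%-12s game tree ~ 10^%d positions" % (name, e))
```

These numbers recover the well-known ~10^123 game-tree estimate for Chess and ~10^360 for Go, and suggest Double Chess would sit somewhere in between.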

So if we used Double Chess for matches between humans and computers, could we find out the ultimate winner for sure? And if the computer won again, would that really mean the end for the human side for sure?

Assuming we lost again, what if we created an even more complex (for both humans and computers) variant of Chess by using an even larger board? For instance, what if we turned Double Chess into Double Double Chess?

And/or what if we added a few of the proposed new chess pieces to the game? Could we then really create a board game at which no classical computer (hardware + software) could ever beat a human master player?

Why is this important?

Because I think the question actually goes far beyond deciding the final outcome of a friendly and fair battle between the creators and their creations. What is the human brain, really? Is it an advanced classical computer, a quantum computer, or an unknown kind of computer? How do human grand masters of Chess/Go play the game compared to computers? Do humans rely only on past knowledge of the game and as much foresight as they can manage?

Or do humans have much more advanced algorithms running in their brains than computers do? I think how a human player decides on game moves is certainly similar to how an ANN algorithm does it, but still goes beyond that. Think about how we make decisions in our brains every moment of our daily lives. At any given time we have a vast number of possibilities to think about. Do we choose what to think about every moment randomly? If there are certain probabilities (which depend on individual past life experiences), how do we make choices between them every moment, again and again, fast? I think the most reasonable explanation would be if our brains were not classical but quantum computers. (So neurons must work like qubit registers.)

And if that is really true, it would mean no classical computer (hardware and software) could ever beat a human brain in a fair game.

(Also, if the human brain is a quantum computer, what about the rest of the human body? The possibilities are: Quantum Computer (QC), classical computer (Turing Machine (TM)), Pushdown Automaton (PDA), or Finite State Machine (FSM). To decide, I think we could look at (functional) computer models of biological systems. Do they operate like an FSM, PDA, TM, or QC? Do their algorithms have conditional branches and conditional loops, like a program for a TM? Or do they always use simple state transitions like an FSM? I don't know much about how those modelling algorithms work; my guess is they are like a TM (which would mean the human body (except the brain) operates like a classical computer).)

https://en.wikipedia.org/wiki/Game_theory

https://en.wikipedia.org/wiki/Computer_chess

https://en.wikipedia.org/wiki/Computer_Go

https://en.wikipedia.org/wiki/List_of_chess_variants

https://en.wikipedia.org/wiki/Double_Chess

https://en.wikipedia.org/wiki/Automata_theory

https://en.wikipedia.org/wiki/Finite-state_machine

https://en.wikipedia.org/wiki/Pushdown_automaton

https://en.wikipedia.org/wiki/Turing_machine

https://en.wikipedia.org/wiki/Quantum_computing

https://en.wikipedia.org/wiki/Modelling_biological_systems

## Saturday, October 7, 2017

### What If Reality Is A CA QC At Planck Scale?

Can we make any predictions to check this, assuming the idea in the title above is true?

Our experiments and observations tell us that at the macro scale, where Relativity seems to rule, there is no indication of quantization of spacetime or gravity.

But at the micro scale, where Quantum Mechanics seems to rule, all quantities seem to be quantized (discrete) in terms of Planck units.

So Quantum Mechanics seems directly compatible, and I think Relativity is not directly but still indirectly compatible, if Relativity is assumed to be an emergent property.

(For example, the simple CA used for fluid simulation are discrete at the micro scale, but create a seemingly continuous world of classical fluid mechanics (the Navier-Stokes equations).)
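The lattice-gas automata usually used for fluid simulation (e.g. HPP/FHP) are 2d; as a toy 1d cousin, here is a minimal sketch showing discrete local rules producing smooth, diffusion-like large-scale behavior. All parameters here are arbitrary choices for illustration.

```python
import numpy as np

# Minimal 1d lattice-gas-style CA: integer particle counts per cell;
# each step, half the particles in a cell hop left and half hop right
# (discrete, local rules; periodic boundary via np.roll). A sharp
# initial spike spreads into a smooth bell-like profile, the discrete
# analogue of continuum diffusion.
N, STEPS = 101, 200
cells = np.zeros(N, dtype=np.int64)
cells[N // 2] = 2**20  # put many particles in the middle cell

for _ in range(STEPS):
    left = cells // 2
    right = cells - left
    cells = np.roll(left, -1) + np.roll(right, 1)

print("total particles conserved:", cells.sum() == 2**20)
print("spike has spread over", np.count_nonzero(cells), "cells")
```

The micro-rule is entirely discrete, yet the coarse-grained profile looks continuous, which is the sense in which a continuum theory could be emergent.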

If our reality is really created by a CA QC (always discrete both structurally and in its cell state values) operating at the Planck scale, then I would expect the following:

Any time duration divided by Planck Time must always be an integer.

Any length divided by Planck Length must always be an integer.

The Compton Wavelength of any quantum particle divided by Planck Length must always be an integer.

The De Broglie Wavelength of any quantum particle divided by Planck Length must always be an integer.

If the minimum possible particle energy (the unit particle energy) is the energy of a photon whose wavelength equals the Planck Length,

then (the Compton Wavelength of any quantum particle divided by Planck Length) must be how many units of particle energy that particle is made of.

(If so, and if there is any mathematical order in the masses of elementary particles, then maybe it should be searched for after

converting their Compton Wavelengths to integers (by dividing each by the Planck Length)?)

(Also, the energy of a Planck particle (in a BH) must be the maximum energy density possible in the universe?

(If so, then the energy of the Planck particle (or its density?) divided by the unit particle energy is how many discrete energy levels (total number of states) are possible per Planck cell?))
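
As a rough numerical sketch of the ratio conjectured above, here is the Compton Wavelength divided by the Planck Length for a few particles (the constants are approximate CODATA values; the integer conjecture itself is of course speculative):

```python
# Rough numerical sketch of the ratio conjectured above. Constants are
# approximate CODATA values; the integer conjecture itself is speculative.
h   = 6.62607015e-34   # Planck constant, J*s (exact in SI)
c   = 2.99792458e8     # speed of light, m/s (exact in SI)
l_P = 1.616255e-35     # Planck length, m

particles = {          # rest masses in kg (approximate)
    "electron": 9.1093837e-31,
    "muon":     1.8835316e-28,
    "proton":   1.67262192e-27,
}

for name, m in particles.items():
    lam_C = h / (m * c)       # Compton wavelength
    print(f"{name:8s} lambda_C/l_P = {lam_C / l_P:.4e}")
```

The ratios come out around 10^20 to 10^23. Since double-precision floats carry only about 16 significant digits, and the constants themselves are not known to anywhere near that precision, the conjectured integrality cannot currently be tested this way; only the order of magnitude is meaningful.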

Also, since all quantum particles are known to be discrete in Planck units (which are known to be the smallest possible units of space, time, wavelength, frequency, amplitude, phase, energy, and possibly also mass), I think this implies (or is at least compatible with) all known (and maybe also unknown) quantum particles actually being some kinds of quasiparticles (which I think could be described as clusters of state information), created by The Reality CA QC At Planck Scale (TRCAQCAPS? :-).

At least my interpretation is that Stephen Wolfram, in a lecture, explained that the neighborhood of a (any) CA is related to its structural dimensions.

From that, and since our universe/reality seems to have 3 space dimensions plus a time dimension at all scales, everywhere and at all times,

we could conclude that the CA part of our reality should have 4 neighbors for each cell, in whatever physical arrangement is chosen among all the physical possibilities.

For example, if the Von Neumann neighborhood physical arrangement is chosen, it would imply we are talking about a 2D square lattice CA.

Or could it be that each center cell is connected to (physically touching) 4 neighbors located around it like the four vertex corners of a regular tetrahedron?

Are there any other physical cell arrangement possibilities I do not know of?

Also, I think all physical conservation laws, like conservation of energy, imply that the CA rules must always conserve the information (stored by the cells).

But what is the full range of possibilities for the internal physical structure/arrangement of the CA cells?

I think first we would need to determine what discrete set of state variables (each made of qubit registers) each CA cell needs to store.

I think if we want the CA to be able to create all quantum particles as quasiparticles, then each cell would need to store all the basic internal quantum particle wave free variables as discrete qubit information units.

Assuming each cell is made of a physical arrangement of a total of N individual single-qubit storage subcells,

and from what we know about both the discrete wave and particle nature of quantum particles, I think it should be possible to determine at least how many qubits are needed for each free state variable.

But do we really know for certain that the CA cells would need to store only quantum particle information?

Would they not also need to store discrete state information about local spacetime?

Because it definitely seems spacetime can bend even when it contains no quantum particles, as around any massive object.

Then the question is what spacetime/gravity state information all the CA cells would also need to store.

Since gravity is the bending of spacetime (which would be flat without gravity), and the local bending state (and more) everywhere is described by the Einstein Field Equations,

we must look into how many free variables those equations contain,

and at least how many qubits would be needed (to express any possible/real value of the spacetime state) to store each of those free variables.
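
A first rough count: both sides of the Einstein Field Equations are symmetric 4x4 tensors (the Einstein tensor built from the metric, and the stress-energy tensor), so each has n(n+1)/2 = 10 independent components in n = 4 dimensions (coordinate freedom and the Bianchi identities reduce the number of truly free ones further):

```python
# Counting the independent components of the symmetric tensors in the
# Einstein Field Equations, as a first step toward the qubit count
# discussed above.

def symmetric_components(n):
    """Independent entries of a symmetric n x n tensor: n*(n+1)//2."""
    return n * (n + 1) // 2

print(symmetric_components(4))  # 10, for the metric (and for stress-energy)
```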

But what if the CA cells do not really need to store spacetime state information?

I had read that the equations of Relativity are similar to the equations of thermodynamics, which are known to "emerge from the more fundamental field of statistical mechanics".

Yes, it seems spacetime can still bend even when it contains no real quantum particles, but does it not always contain virtual particles?

(According to QM, virtual particle pairs, where one particle always has positive and the other negative energy/mass, pop in and out of existence for extremely short durations, everywhere.)

(I think those pairs of virtual particles must go out of existence by colliding back together, so that their energies cancel out.)

Realize that what determines the bending state of spacetime anywhere is the existence of real quantum particles there.

If there are lots of real quantum particles with positive energy/mass, then the spacetime has positive curvature there.

And if there were lots of real quantum particles with negative energy/mass, then the spacetime would have negative curvature there.

What if the total curvature state of any spacetime volume is completely determined by the balance (and density) of positive and negative quantum particles there?

(Meaning, if the spacetime curvature is positive somewhere, then if we calculated the total positive and negative energy from all real and virtual particles there, we would find the positive energy is higher, accordingly. And vice versa: if the spacetime curvature is negative somewhere, the total negative energy is higher, accordingly.)

What would this mean where there is a gravitational field but no real (positive-energy) particles?

I think it would mean the number of positive-energy virtual particles must be higher than the number of negative-energy virtual particles there at any given time.

The consequence of this for the CA cells would be that they would only need to store (positive/negative) quantum particle state information; no spacetime state information.

And if we could really determine exactly how many physical qubits each of the CA cells would (at least) need,

then we could research the physical arrangement possibilities for the internal physical structure of the CA cells.

A reader may have noticed that a big assumption behind some of the above ideas is physical realism.

Because I think if we do not require physical realism (plausibility), then how can we hope to make any progress on solving the problem of reality, if reality itself is not physically realist? :-)

I think a prediction of this TRCAQCAPS idea is that Black Holes must be made of Planck particles.

(Imagine the size (Compton Wavelength) of any quantum particle keeps getting smaller with increasing gravity until finally its Compton Wavelength becomes equal to its Schwarzschild radius.)

I think Hawking Radiation implies BHs have at least a surface entropy, indicating discrete information units/particles in units of Planck area.

I think that could be how a BH looks to observers around it, and the actual total entropy of a BH could be the Event Horizon volume divided by the Planck (particle/unit?) volume.
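
For comparison, the standard (Bekenstein-Hawking) surface entropy counts the Event Horizon area in units of 4 Planck areas; here is a quick sketch for a solar-mass BH (standard textbook formulas; the volume-based count suggested above would give a different, far larger number):

```python
import math

# Standard Bekenstein-Hawking surface entropy S = A / (4 * l_P^2)
# (in units of Boltzmann's constant), for a solar-mass black hole.
G     = 6.67430e-11     # gravitational constant, m^3 kg^-1 s^-2
c     = 2.99792458e8    # speed of light, m/s
l_P   = 1.616255e-35    # Planck length, m
M_sun = 1.989e30        # solar mass, kg

r_s = 2 * G * M_sun / c**2          # Schwarzschild radius, about 3 km
A   = 4 * math.pi * r_s**2          # Event Horizon area
S   = A / (4 * l_P**2)              # entropy in units of k_B, about 1e77
print(f"r_s = {r_s:.3e} m, S/k_B = {S:.3e}")
```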

I think if spacetime is discrete at the Planck scale, maybe the Holometer experiment could help prove it someday.

Could a Gravitational Wave detector in space someday find evidence of GW discretization (and therefore spacetime)?

I recently read news (some links I found are referenced below) about a new kind of atomic clock that uses multiple atoms together to get a (linearly/exponentially? (based on the number of atoms)) more stable time frequency.

I am guessing (I did not fully read all the news about it) that it must be done by forcing the atoms (oscillators) into synchronization somehow.

Which brings up the question: what is the limit for measuring time durations in terms of resolution?

Will atomic clocks someday finally reach the Planck Time measurement scale (and directly show time is discrete in Planck Time units)?

(On a side note, could we create a chip that contains a 2D/3D grid of analog/digital oscillator circuits and force them into synchronization somehow, to reach atomic clock precision?)
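
A toy version of that synchronization idea is the Kuramoto model, a standard model of coupled oscillators pulling one another into phase lock; the following sketch (arbitrary parameters, not a circuit design) shows the order parameter rising toward 1 as the oscillators lock:

```python
import math
import random

# Kuramoto model sketch: N oscillators with slightly different natural
# frequencies, each coupled to the mean phase of the group. With strong
# enough coupling K they phase-lock. Parameters are arbitrary.
random.seed(0)
N, K, dt, steps = 50, 2.0, 0.01, 2000
omega = [1.0 + random.gauss(0, 0.1) for _ in range(N)]      # natural frequencies
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]  # random phases

def order_parameter(phases):
    """Magnitude of the mean phase vector: 1 = perfect sync, ~0 = none."""
    re = sum(math.cos(t) for t in phases) / len(phases)
    im = sum(math.sin(t) for t in phases) / len(phases)
    return math.hypot(re, im)

r_before = order_parameter(theta)
for _ in range(steps):
    re = sum(math.cos(t) for t in theta) / N
    im = sum(math.sin(t) for t in theta) / N
    r, psi = math.hypot(re, im), math.atan2(im, re)
    # mean-field Kuramoto update: dtheta_i = omega_i + K*r*sin(psi - theta_i)
    theta = [t + dt * (w + K * r * math.sin(psi - t))
             for t, w in zip(theta, omega)]

print(f"order parameter: {r_before:.2f} -> {order_parameter(theta):.2f}")
```

With K well above the critical coupling, the final order parameter comes out close to 1; lowering K toward the spread of natural frequencies leaves the oscillators unsynchronized.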

My sincere hope is that the ideas presented above could someday lead to testable/observable predictions about the true nature of our universe/reality.

https://en.wikipedia.org/wiki/Theory_of_relativity

https://en.wikipedia.org/wiki/Quantum_mechanics

https://en.wikipedia.org/wiki/Cellular_automaton

https://en.wikipedia.org/wiki/Von_Neumann_neighborhood

https://en.wikipedia.org/wiki/Tetrahedron

https://en.wikipedia.org/wiki/Quantum_computing

https://en.wikipedia.org/wiki/Planck_particle

https://en.wikipedia.org/wiki/Holometer

https://en.wikipedia.org/wiki/Atomic_clock

https://www.livescience.com/60612-most-precise-clock-powered-by-strontium-atoms.html

https://www.engadget.com/2017/10/06/researchers-increased-atomic-clock-precision/?sr_source=Twitter

https://www.digitaltrends.com/cool-tech/worlds-most-precise-atomic-clock/

## Friday, October 6, 2017

### Emergent Property Problem

Emergent properties are everywhere in physics.

Some of the biggest ones:

Chemistry is an emergent property of Quantum Mechanics.

Biology is an emergent property of Chemistry.

Psychology is an emergent property of Biology.

Sociology is an emergent property of Psychology.

I think Quantum Mechanics (and Relativity) is also an emergent property of a Cellular Automaton Quantum Computer (CAQC) operating at the Planck scale. If so, how can we find out its operation rules?

How about we try to understand the general mathematical problem first?

The problem is this:

We are given the high-level (macro-scale) rules of an emergent property and asked: what are the low-level (micro-scale) rules which created those high-level rules?

(Also the reverse of this problem is another big problem.)

Could we figure out the rules of Quantum Mechanics only from the rules of Chemistry (and vice versa)?

When we try to solve a complex problem, obviously we should try to start with a simpler version of it, whenever possible.

There are many methods for Computational Fluid Dynamics (CFD) simulations. If we were given 2D fluid simulation videos of a certain resolution and duration for each different method, could we analyze those videos using computer software to find out which video was produced by which method? At what resolution and duration does the problem become solvable/unsolvable for certain? Moreover, at what resolution and duration can we, or can we not, figure out the specific rules of each method?

How about an even simpler version of the problem:

What if we used two-dimensional cellular automaton (2D CA)?

Imagine we run any 2D CA algorithm using X*Y cells for N time steps to create a grayscale video.

Also imagine that each grayscale pixel in the video is calculated as the sum or average of an M-by-M block of cells, like a tile.

At what video resolution and duration can we, or can we not, figure out the full rule set of the 2D CA algorithm?

How about an even simpler version of the problem:

What if we used one-dimensional cellular automaton (1D CA)?

Imagine we run any 1D CA algorithm using X cells for N time steps to create a grayscale video.

Also imagine that each grayscale pixel in the video is calculated as the sum or average of M cells, like a tile.

At what video resolution and duration can we, or can we not, figure out the full rule set of the 1D CA algorithm?
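
The forward direction of this construction is easy to sketch in Python (Rule 110 and the sizes X, N, M are arbitrary choices); the hard inverse problem posed above is then to recover RULE given only `video`:

```python
# Sketch of the setup described above: run an elementary 1D CA, then
# coarse-grain each row into tiles of M cells (averaged), giving the
# "grayscale video" whose rule set the inverse problem asks us to
# recover. Rule 110 and the sizes here are arbitrary choices.

RULE = 110
X, N, M = 64, 32, 8   # cells, time steps, tile size

# Elementary CA lookup table: neighborhood (a, b, c) -> output bit.
rule_table = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
              for a in (0, 1) for b in (0, 1) for c in (0, 1)}

row = [0] * X
row[X // 2] = 1                        # single seed cell
video = []
for _ in range(N):
    # coarse-grain: average each tile of M cells into one gray pixel
    gray = [sum(row[i:i + M]) / M for i in range(0, X, M)]
    video.append(gray)
    row = [rule_table[(row[i - 1], row[i], row[(i + 1) % X])]
           for i in range(X)]          # periodic boundary

print(len(video), len(video[0]))       # N rows of X // M gray pixels each
```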

(And the reverse problem is this:

Assume the grayscale video described above for a 1D/2D CA shows the operation of another CA (which is the emergent property).

Given the rule set of any 1D/2D CA, predict the rule set of its emergent-property CA for any given tile size.)

Also, what if the problem in either direction has a constraint?

For example, what if we already know that the unknown 1D/2D CA we are trying to figure out is a Reversible CA?

https://en.wikipedia.org/wiki/Cellular_automaton

https://en.wikipedia.org/wiki/Elementary_cellular_automaton

https://en.wikipedia.org/wiki/Reversible_cellular_automaton
