Wednesday, November 8, 2017

A GUT OF QUANTUM GRAVITY

What really is spacetime and what really are elementary quantum particles?

Imagine spacetime is an emergent property: a gas-like medium, created by virtual quantum particles that keep popping in and out of existence for extremely short durations, which is also the medium of the quantum vacuum. Imagine flat spacetime is the volume where the probabilities for creation of positive and negative energy/mass virtual particles are equal. Imagine positive-curvature spacetime is the volume where the probability for creation of positive energy/mass virtual particles is higher. Imagine negative-curvature spacetime is the volume where the probability for creation of negative energy/mass virtual particles is higher. (Realize that spacetime would then really be a medium of probability.) Imagine that when a region has excess positive energy available, positive energy/mass virtual particles are not just created more often but stay in existence longer.
And whenever/wherever the energy is higher than the necessary thresholds, virtual particles are created as real particles. (And when a region has excess negative energy available instead, negative energy/mass virtual/real particles are created similarly.)

Imagine that when light passes through spacetime regions with different positive/negative curvature, it is like passing through gas/fluid regions with positive/negative indices of refraction.

(So a positive energy/mass particle/object creates a field of positive spacetime curvature around itself, which we call its gravitational field.)

Realize that if a gravitational field is a polarization of virtual particles, then creating the Casimir Force is actually creating artificial spacetime curvature/gravity!

Imagine all elementary quantum particles of the Standard Model, which are used to create the virtual particles, which create the gas-like spacetime medium, are really quasiparticles of a fluid-like medium, like bubbles created by a boiling fluid. Imagine that fluid-like medium is created by a Cellular Automaton Quantum Computer (CAQC) with Planck-length-scale cells of qubit registers. Imagine each elementary quantum particle is like a cluster of information/probability. Probably like a spherical probability wave, traveling in the fluid-like medium created by the CA, maybe similar to the CA used for fluid simulation, like LGCA (FHP)/LBM. (Also realize that what happens in CA used for fluid simulation, regarding predictability of the future (the nature of time), is really similar to what happens in our real physical Universe:
At microscale the future is unpredictable (particles move randomly), but it becomes more and more predictable with certainty as we watch it at higher and higher scales. Imagine we observe that CA world using bigger and bigger tiles, calculating the average particle number/velocity/acceleration for each tile. Then the CA world follows the rules of classical physics (the Navier-Stokes Equations) better and better. Meaning the future becomes better and better predictable as we observe the CA world at higher and higher scales.
Which is very similar to how future events are unpredictable with certainty at QM scale, compared to how future events are predictable with certainty at Relativity scale. And the predictability of future events is in between those two extremes at Newtonian Mechanics (human) scale.)
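Here is a minimal Python sketch of this coarse-graining idea (not FHP/LBM itself, just a random particle grid, with all sizes chosen arbitrarily): per-cell values look random, but tile averages become more and more stable as the tiles grow, which is the predictability-vs-scale effect described above.

    import numpy as np

    rng = np.random.default_rng(0)
    grid = rng.integers(0, 2, size=(512, 512))   # 1 = particle present in cell

    for tile in (1, 8, 64):
        n = 512 // tile
        # average particle count per tile ("observing at a higher scale")
        tiles = grid.reshape(n, tile, n, tile).mean(axis=(1, 3))
        print(f"tile={tile:3d}  std of tile densities = {tiles.std():.4f}")
    # the spread shrinks roughly like 1/tile: the coarse view is more predictable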

If the ideas above are assumed to be true, then it would mean that somehow the quasiparticles of the Planck-scale medium are allowed to exist only as a discrete and limited set, which is the elementary quantum particles of the Standard Model. (So nothing like soap bubbles, which have a continuous size range and an identical/similar nature.)

Also, obviously this Planck-scale medium has a limited maximum signal/information travel speed, which we call the speed of light (c). So quantum particles without rest mass always travel at c.
And quantum particles with rest mass travel at lower speeds depending on their rest mass plus kinetic energy. What slows them down, I am guessing, is the Higgs particle field across our Universe.

So rest mass is like a binary property of elementary quantum particles, with possible values of 0 or 1. If it is 1, then the particle creates a drag moving through the Higgs Field, because of its interaction with it. Then its speed through the Higgs field depends on its total energy (rest mass/energy plus kinetic energy, which determines the size (wavelength) of the particle). And if its total energy is greater, then its size/wavelength is smaller, and it moves faster through the Higgs field and so through spacetime.

I think the Standard Model is not complete and there are at least two more elementary particles to be discovered. I think one of them is the Planck Particle, and it must be what Black Holes are made of. I think the other must be the particle of Dark Matter (could it be the graviton?).

Based on the ideas above, I think the recent discovery of "hot gas" in DM clouds/filaments must be because DM creates a positive spacetime curvature, which means higher probabilities for positive energy/mass virtual particles of the quantum vacuum. (So it is a phenomenon similar to Hawking Radiation.)

But why do elementary quantum particles have quantum properties/abilities like entanglement? I think it could be because reality is created by a Cellular Automaton Quantum Computer (CAQC) with Planck-scale cells. So, since the elementary quantum particles of the SM are the quasiparticles of this CAQC, they also have quantum properties, since they are clusters of qubit information processed by a (CA) QC.

If gravitational fields are fields of (positive) spacetime curvature, and spacetime is a medium created by virtual particles, then how would objects attract each other? Obviously, a vacuum region with higher probabilities for positive energy/mass virtual particles must be like a low-pressure gas region of the spacetime medium. And a vacuum region with higher probabilities for negative energy/mass virtual particles must be like a high-pressure gas region of the spacetime medium. (Imagine each particle with positive energy/mass is a region of positive curvature (of the Planck-scale medium), so when they group together in clusters (objects with mass), they create a macroscale positive-curvature region, like a low-pressure gas region of the gas-like spacetime medium.)

Thursday, November 2, 2017

The Table Of Elementary Quantum Particles

I think the discovery of the Periodic Table (PT) of chemical elements allowed accurate prediction of many new/unknown elements and their various properties. (If we are given only the atomic number and mass number of an element, can we accurately predict all its properties (nuclear, chemical, physical, electric, magnetic), using only Quantum Mechanics?) So the set of all chemical elements clearly has a basic (and standard) order (PT)! But there are also many known (and useful) alternative periodic tables (APT). (Isn't there any precise (and unique) mathematical/geometric object/structure/group/graph for the set of all chemical elements, other than various table structures? And if so, could that object explain all basic properties of all elements?) So we could say the order of the set of all chemical elements is not really unique!

Do we really have any true equivalent of the PT/APT for the elementary quantum particles (of the Standard Model)? I think the answer is really no! Because no clear order for the energies/masses of the elementary quantum particles has been found so far!

I think if there is truly no order (can we ever hope to prove that mathematically?), then it could be viewed as a sign of a multiverse (or Intelligent Design?)! And if there is an order and it is unique, then it could be viewed as a sign of the natural inevitability of our reality/universe. My guess is that it will turn out similar to the PT/APT situation of the set of all chemical elements (a non-unique order)!

What can we do to find it/them, if it really exists?

I think as a first step we should try to create a basic (and standard) table of elementary quantum particles. It needs to be sorted by particle (rest) energy (since that is the order we are primarily trying to explain), and it surely needs to be simplified using Planck Units.

Here is a proposal for a basic (and standard) table of elementary quantum particles:

Column 0: Name/symbol of the elementary particle

Column 1: Compton Wavelength of the elementary particle in Planck Length Units

Column 2: Corresponding Compton Frequency of the elementary particle

Column 3: Does the elementary particle have rest mass?: Y/N

Column 4: Electric Charge (in Electron Charge units) (Or, is there a Planck unit for electric charge? There is: the Planck charge, roughly 11.7 electron charges.)

Column 5: Spin

Column 6: Color Charge

(The table needs to be sorted (ascending/descending) by column 1 values, by default.)
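As a minimal sketch (in Python), here is how columns 0-3 of such a table could be computed for three charged leptons; the masses are standard measured values in kg, while the table layout itself is just this proposal:

    h, c, l_planck = 6.626e-34, 2.998e8, 1.616e-35   # SI units

    particles = [("electron", 9.109e-31), ("muon", 1.884e-28), ("tau", 3.167e-27)]

    rows = []
    for name, m in particles:
        lam = h / (m * c)                  # Compton wavelength (m)
        rows.append((name, lam / l_planck, c / lam, "Y"))

    # sorted ascending by column 1 (Compton wavelength in Planck lengths)
    for name, lam_pl, freq, has_mass in sorted(rows, key=lambda r: r[1]):
        print(f"{name:8s}  {lam_pl:.3e} l_P  {freq:.3e} Hz  rest mass: {has_mass}")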

https://en.wikipedia.org/wiki/Chemical_element
https://en.wikipedia.org/wiki/List_of_chemical_elements
https://en.wikipedia.org/wiki/Atomic_number
https://en.wikipedia.org/wiki/Mass_number
https://en.wikipedia.org/wiki/Periodic_table
https://en.wikipedia.org/wiki/Alternative_periodic_tables
https://en.wikipedia.org/wiki/Standard_Model

Saturday, October 28, 2017

Production Of Quantum Particles

How might quantum particles be produced by our Universe?

Assume our reality is created by a CA QC operating at Planck Scale.
Assume it creates a Planck-Scale-Particle-based fluid medium, just like an LBM (CA) creates a 2d/3d fluid simulation.
Assume that when that fluid medium starts boiling, it creates bubbles (which are its quasiparticles).
And since the cells of the CA QC are qubit (register(s)) based, those bubbles/quasiparticles have quantum properties.

So assume our universal fluid medium creates bubbles/quasiparticles (quantum particles),
as (positive/negative energy) virtual/real single/pair particles/antiparticles, depending on local conditions.

Assume our perception of spacetime is created by the virtual particles of the quantum vacuum.
Assume a gravitational field is a polarization of spacetime.
Assume positive spacetime curvature is actually the quantum vacuum producing more positive energy virtual particles than negative.
Assume negative spacetime curvature is actually the quantum vacuum producing more negative energy virtual particles than positive.
(So the Casimir Force is actually creating artificial gravity/anti-gravity!)
And if the (positive/negative) curvature is beyond the necessary threshold, then a real particle (pair) is produced instead of a virtual particle (pair).

So we can say:
The amplitude of the spacetime curvature decides whether a virtual or a real particle (pair) will be produced.
The sign of the spacetime curvature decides whether a positive or a negative energy/mass particle (pair) will be produced.
The polarization/rotation/spin of the spacetime curvature (?) decides whether a particle and/or an anti-particle will be produced.

Matter And Dark Matter

Assume that at the beginning of The Big Bang, the Universe was a ball of positive energy in the middle of a medium of negative energy.
Later it started absorbing negative energy and so started expanding.
As its positive energy density dropped below a threshold, DM particles were created near-uniformly everywhere. As the Universe continued to expand, DM particles coalesced into the filaments of the cosmic web.

The BB also created hydrogen and helium uniformly everywhere.
Later, DM filaments provided guidance for matter, stars, and galaxies to form. But we must realize this view leads to the Baryon Asymmetry Problem!

What if the matter of our Universe was created through a different mechanism, one which is asymmetric?

If we look at our Universe, it looks like matter has coalesced in the central regions of DM filaments/clouds. What if matter did not coalesce there, but was created in those central regions of DM clouds?

What if, whenever/wherever DM cloud density goes above a certain threshold, particles of the Standard Model are created, without their anti-particles? (And then later the DM cloud density would drop below the threshold there, like a negative feedback mechanism. And if so, that would mean the total amount of DM in the Universe must be decreasing over time!)

And what if DM particles are gravitons with extremely low mass/energy, and so with an extremely large size (Compton Wavelength)?
That may be why we cannot detect them directly and why they cannot join with each other to create a BH etc. (There may be a rule for them similar to the Pauli Exclusion Principle?)

About Graviton from Wikipedia:

"The analysis of gravitational waves yielded a new upper bound on the mass of gravitons, if gravitons are massive at all. The graviton's Compton wavelength is at least 1.6×10^16 m, or about 1.6 light-years, corresponding to a graviton mass of no more than 7.7×10^-23 eV/c2.[17] This relation between wavelength and energy is calculated with the Planck-Einstein relation, the same formula which relates electromagnetic wavelength to photon energy."

https://en.wikipedia.org/wiki/Dark_energy
https://en.wikipedia.org/wiki/Dark_matter
https://en.wikipedia.org/wiki/Graviton
https://en.wikipedia.org/wiki/Pauli_exclusion_principle
https://en.wikipedia.org/wiki/Pair_production
https://en.wikipedia.org/wiki/Two-photon_physics
https://en.wikipedia.org/wiki/Baryon_asymmetry
https://en.wikipedia.org/wiki/Standard_Model

Tuesday, October 24, 2017

Spacetime Curvature And Speed Of Light

What if Gravity is the 5th, emergent dimension? (So the mass/energy of a particle is its location in the gravity dimension (+ or -).)
(The 2D surface of a sphere is bent in the 3rd dimension. 4D spacetime is bent in the 5th (Gravity) dimension whenever (+ or -) energy/mass is present.)

When a positive spacetime curvature is present, light must slow down passing through that location. (Just like light slows down and refracts when it enters water from air.)
And if so, then how can the index of refraction and the local speed of light be calculated for any spacetime location?

The spacetime curvature (which we can calculate) determines the deflection angle (which we can also calculate).
Using Snell's Law:
sin(t0)/sin(t1)=v0/v1=n1/n0
Also let c be the speed when there is no curvature.
If we plug in the values we know/assume, then this is what we have:
sin(t0)/sin(t1)=c/v1=n1/n0

We can calculate the total bending (deflection) angle of light (in radians) in General Relativity:
deltaPhi=4*G*M/c^2/R (M: Mass in kg; R: Distance from center in meters; c: Speed of light in m/s; G: 6.7E-11)

Assume the incoming angle of light is 90 degrees (pi/2 radians) (so the refraction index of the first medium is 1, because n=c/v and there is no spacetime curvature in the first medium):
=> 1/sin(t1)=c/v1=n1/1 =>
1/sin(deltaPhi)=c/v1=n1/1 =>
1/sin(4*G*M/c^2/r)=c/v1=n1/1 =>
n1=c/v1=1/sin(4*G*M/c^2/r) (index of refraction for any spacetime location bending light)
(=> Possible extreme values: 1/0=inf or -inf depending on direction of approach; 1/1=1; 1/-1=-1 => Range: -inf to +inf)
v1=c/n1=c*sin(4*G*M/c^2/r) (local speed of light for any spacetime location bending light)
(=> Possible extreme values: c*0=0; c*1=c; c*(-1)=-c => Range: -c to +c)
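A hedged numeric sketch (Python) of the formulas above, using the Sun as the lens; note that deltaPhi is the standard GR deflection, while n1 and v1 follow this post's proposal rather than standard GR:

    import math

    G, c = 6.674e-11, 2.998e8
    M, r = 1.989e30, 6.96e8                 # solar mass (kg) and radius (m)

    delta_phi = 4 * G * M / (c**2 * r)      # GR deflection angle (radians)
    n1 = 1 / math.sin(delta_phi)            # proposed index of refraction
    v1 = c / n1                             # proposed local speed of light

    print(math.degrees(delta_phi) * 3600)   # ~1.75 arcsec, the classic value
    print(n1, v1)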
(A negative c would mean time is flowing backwards there!? c is the flow rate of time (the event information flow (perception) rate) anywhere.)
(So it is not possible to make time move faster than c, but it can be slowed, and its direction may be changed using negative energy/mass.)
(Light slows down in a (positive) gravitational field because the field is denser from the light's point of view. Imagine more positive energy/mass virtual particles on the way.

A gravitational field is actually a local polarization of the virtual particle (each with + or - energy/mass) balance at any spacetime location.) If the total net energy is negative, then the curvature would be negative. Then the index of refraction would also be negative.

(The speed of light anywhere is the speed of information flow between the CA cells, which determines the perception of events in Relativity by any observer.)

https://en.wikipedia.org/wiki/Refractive_index
https://en.wikipedia.org/wiki/Snell%27s_law

Sunday, October 22, 2017

Geometry of Our Universe 2

http://scienceblogs.com/startswithabang/2017/10/22/comments-of-the-week-final-edition/

Ethan wrote:
"From Frank on the curvature of the Universe: “What if Universe is surface of a 4d sphere where 3d surface (space) curved in the 4th dimension (time)?”"
"Well, there is curvature in the fourth dimension, but the laws of relativity tell you how the relationship between space and time occur. There’s no wiggle-room or free parameters in there. If you want the Universe to be the surface of a 4D sphere, you need an extra spatial dimension. There are many physics theories that consider exactly that scenario, and they are constrained but not ruled out."

Then what if I propose that the gravitational field across the Universe is the fifth dimension (for the Universe to be the surface of a 4D sphere)? (And also think about why it seems gravity is the only fundamental force that affects all dimensions. Couldn't it be because gravity itself is a dimension, so it must be included together with the other dimensions (of spacetime) in physics calculations?)

And why is it really important to know the general shape/geometry of the Universe?

I think then we could really answer whether the observable universe and the global universe are the same or not, and if they are the same, then we would also know that the Universe is finite in size. (And we could also calculate the general curvature of the Universe for any time, which would help cosmology greatly, no doubt.)

I am guessing the currently known variations in the CMB map of the Universe match the distribution of matter/energy in the observable Universe only in a general (non-precise) way. I think, if the Universe is really the 3d (space) surface of a 4D sphere, curved in the 4th dimension (time) (with gravity as the 5th dimension), then we could use the CMB map of the Universe as CT scan data, and could calculate the 3d/4d matter/energy distribution of the whole Universe from it. And then, if it matches (as a whole) the matter/energy distribution of our real observational Universe (which comes from other (non-CMB) observations/calculations), then we could know for sure whether our observational and global Universes are identical or not. (If not, then by looking at the partial match, maybe we could still deduce how large our global Universe really is.)

Further speculation:

Let's start with this: spacetime is 4D (3 space dimensions and a time dimension).
Gravitational curvature at any spacetime point must be a 4D value => 4 more dimensions for the Universe.
If electric field at any spacetime point is a 4D value => 4 more dimensions for the Universe.
If magnetic field at any spacetime point is a 4D value => 4 more dimensions for the Universe.
Then the Universe would have 4+4+4+4=16 dimensions total!
(Then the dimensions of the Universe could be 4 quaternions = 2 octonions = 1 sedenion.)
(But if electric and magnetic fields require 3d + 3d, then the dimensions of the Universe would be 4+4+3+3=14 dimensions!)

20171028:
If our Universe has 16 dimensions and if our reality is created by a CA QC at Planck Scale, then its cell neighborhood may be like a tesseract or a double-cube (16 vertices). Or if our Universe has 14 dimensions and if our reality is created by a CA QC at Planck Scale, then its cell neighborhood may be like a Cube-Octahedron Compound or a Cube 2-Compound (14 vertices).

(20171104) What if Kaluza–Klein Theory (which unites Relativity and Electromagnetism using a fifth dimension) is actually correct, taking the gravitational field across the universe as the fifth (macro/micro) dimension? (Maybe compatibility with Relativity requires taking it as a macro dimension, and QM requires taking it as a micro dimension? (Which would be fine!?))

(20171115) According to Newtonian Physics, the speed of any object in the Universe is always:
|V|=(Vx^2+Vy^2+Vz^2)^(1/2) or V^2=Vx^2+Vy^2+Vz^2
But according to the Special Theory of Relativity, it really is:
c^2=Vx^2+Vy^2+Vz^2+Vt^2, which also means Vt^2=c^2-Vx^2-Vy^2-Vz^2 and so |Vt|=(c^2-Vx^2-Vy^2-Vz^2)^(1/2)
So, if the gravitational field across the Universe is actually its 5th (macro) dimension, then:
c^2=Vx^2+Vy^2+Vz^2+Vt^2+Vw^2, which also means Vw^2=c^2-Vx^2-Vy^2-Vz^2-Vt^2 and so |Vw|=(c^2-Vx^2-Vy^2-Vz^2-Vt^2)^(1/2)
(Is this the equation to calculate spacetime curvature from 4D velocity in General Relativity?)
(Equivalence Principle says gravity is equivalent to acceleration => Calculate its derivative?)
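A small Python sketch of the 4D-velocity bookkeeping above: for any object, spatial speed trades off against the "speed through time" component Vt so that the total stays c.

    import math

    c = 2.998e8
    vx, vy, vz = 0.6 * c, 0.0, 0.0          # example: 0.6c along x

    vt = math.sqrt(c**2 - vx**2 - vy**2 - vz**2)
    print(vt / c)                           # 0.8: remaining "speed through time"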

Saturday, October 21, 2017

Explaining Masses of Elementary Quantum Particles

How can we explain the masses of elementary quantum particles?

All elementary quantum particles have energy, some in the form of (rest) mass. Then the (rest) mass value of each particle is just 0 or 1.

Then what really needs to be explained is the energy distribution (order) of the list of elementary quantum particles.

We already know the energy of each particle is quantized (discrete) in a Planck unit. (Then the energy of each elementary particle is an integer.) And the Compton Wavelength of each particle can be seen as its energy/size.

Then what needs to be explained is this:

Imagine we made a (sorted) bar chart of the energies of the elementary quantum particles. Then, is there a clear order to how the energy changes from lowest to highest?

Or what if we made a similar sorted bar chart of particle Compton Wavelengths?

Or what if we made a similar sorted bar chart of particle Compton Frequencies?

Realize that the problem we are trying to solve is a kind of curve fitting problem.

Also realize we are really treating the data as a time series here.
But how do we really know if our data is a time series?

Also realize that, if we consider the case of the sorted bar chart of particle Compton Frequencies, then what we really have is a frequency distribution (not a time series).

Wikipedia says: "The Fourier transform decomposes a function of time (a signal) into the frequencies that make it up"

Then what if we apply the Inverse Fourier Transform to the Compton frequency distribution of elementary quantum particles?

Wouldn't we get a time series that we could use for curve fitting?

(Also, wouldn't it then be possible that the curve we found could allow us to predict whether there are any smaller or larger elementary particles which we have not discovered yet?)
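Here is a minimal Python sketch of that proposal, with everything in it an illustrative assumption: put unit spikes at the (rescaled) Compton frequencies of the three charged leptons, then take an inverse FFT to get a candidate "time series" one could try to curve-fit.

    import numpy as np

    compton_freqs = np.array([1.236e20, 2.556e22, 4.297e23])  # e, mu, tau (Hz)
    bins = 4096
    idx = (compton_freqs / compton_freqs.max() * (bins - 1)).astype(int)

    spectrum = np.zeros(bins)
    spectrum[idx] = 1.0                      # frequency-distribution spikes
    signal = np.fft.irfft(spectrum)          # candidate "time series"
    print(signal[:5])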

https://en.wikipedia.org/wiki/Fourier_transform
https://en.wikipedia.org/wiki/Curve_fitting
https://en.wikipedia.org/wiki/Time_series

Wednesday, October 18, 2017

Geometry of Our Universe

The following are my comments recently published at:
http://scienceblogs.com/startswithabang/2017/10/14/ask-ethan-is-the-universe-finite-or-infinite-synopsis/

@Ethan:
“If space were positively curved, like we lived on the surface of a 4D sphere, distant light rays would converge.”
Think of the surface of a 3d sphere first:
It is a 2d surface curved in the 3rd dimension.
Now think of the surface of a 4d sphere:
It is a 3d surface curved in the 4th dimension.
What if Universe is surface of a 4d sphere where 3d surface (space) curved in the 4th dimension (time)?
So is it really not possible that the 3d space we see using our telescopes could be flat in those 3 dimensions of space, but curved in the time dimension?

First let me try to better explain what I mean exactly:
Let’s first simplify the problem:
Assume our universe was 2d, as the surface of a 3d sphere. Now latitude and longitude are our 2 space dimensions. Our distance from the center of the sphere is our time dimension.

Since our universe is the surface of a 3d sphere, it has a general uniform positive curvature that depends on our time coordinate at any time.

Now the big question is this:
As beings of 2 dimensions now, can we directly measure the global uniform curvature of our universe in any possible way? Or, asking the same question another way: would our universe look curved or flat to us?

If the speed of light was high enough, and if we had an astronomically powerful laser, we could send a beam in any direction and later see it come back from the exact opposite direction, some time later.
Then we would know for certain our universe is finite.
But I claim we still would not know the general curvature of our universe.

Could we really find/measure it by observing the stars or galaxies around us, in our 2d universe?

For the answer, first realize we don't know any poles for our universe. We can use any point in our 2d universe as our North Pole; would it make any difference for coordinates/measurements/observations?
Then why not take our location in our 2d universe as the north pole of our universe.

Now try to imagine all the longitude lines coming into our location (the north pole of our coordinate system) as the star/galaxy lights.
Can we really see/measure the general curvature of our universe from those light beams coming to us from every direction we can see?
I claim the answer is no.

Why? I claim that as long as we are making all observations and experiments to calculate the general curvature using only our space dimensions (latitude and longitude),
we would always find it to be perfectly flat in those 2 dimensions. I also claim we could calculate the general curvature of our 2d universe (latitude and longitude) only if we include precise time coordinates in the measurements/experiments, as well as precise latitude and longitude coordinates.

So I really claim our universe looks flat to us because we are making all observations/measurements in 3 space dimensions. But if we also include time coordinates, then we can calculate the true general curvature of our universe.

And I further claim:

Curvature of a circle (1d curved line in 2d space):
1/r

Curvature of a sphere (2d curved surface in 3d space):
1/r^2

Curvature of a hypersphere (3d curved space in 4d space):
1/r^3

So if our universe was 2d space and 1 time (a 2d curved surface in 3d space):
Its general curvature at any time would be:
1/r^2=1/(c*t)^2 (where c is the speed of light and t is the time passed since The Big Bang, in seconds)

And so if our universe is 3d space and 1 time (a 3d curved space in 4d space):
Then its general curvature at any time is:
1/r^3=1/(c*t)^3 (where c is the speed of light and t is the time passed since The Big Bang, in seconds)

And I further claim:

If astrophysicists recalculated the general curvature of our universe, by including all space and time coordinate information correctly, then they should be able to verify that the calculation results always match the theoretical value, which is 1/(c*t)^3.

The raw data to use for those calculations would be pictures of the universe in the same direction, looking at views there from different times.

I realized this value for the current general curvature of our universe (1/(c*t)^3) would be correct only if we ignore the expansion of the universe. To get correct values for any time, we need to use the actual radius of the universe at that time, including the effect of the expansion until that time.

Wikipedia says:
“it is currently unknown whether the observable universe is identical to the global universe”

Given what I claimed above, I claim they are identical.

(So if the current radius of the observational universe is 46 Bly, then I claim the current global curvature of our universe is 1/(46 Bly in meters)^3.)
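A numeric sketch (Python) of that claimed value, under this post's assumptions:

    ly = 9.461e15            # meters per light-year
    r = 46e9 * ly            # ~4.35e26 m, radius of the observable universe
    print(1 / r**3)          # ~1.2e-80 per cubic meter, the claimed curvature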

Dark Matter and Nature of Gravitational Fields And Spacetime

The following are my comments recently published at:
http://scienceblogs.com/startswithabang/2017/10/10/missing-matter-found-but-doesnt-dent-dark-matter-synopsis/

“Neutral atoms formed when the Universe was a mere 380,000 years old; after hundreds of millions of years, the hot, ultraviolet light from those early stars hits those intergalactic atoms. When it does, those photons get absorbed, kicking the electrons out of their atoms entirely, and creating an intergalactic plasma: the warm-hot intergalactic medium (WHIM).”
So the UV light from the earliest stars keeps the intergalactic gas hot (and somehow does it perfectly for all gas atoms).
But how is it possible that the UV photons stayed the same after billions of years of expansion of the universe?
I have a really crazy idea on this WHIM, which may be a better explanation though:
What if WHIM is no ordinary gas?
What if WHIM is an effect similar to Hawking Radiation?

What if spacetime is created by virtual particles as an emergent property? What if Gravitational Fields are polarizations of spacetime? (Where positive curvature indicates the probabilities of positive energy/mass virtual particles are higher in that region, and negative curvature indicates the probabilities of negative energy/mass virtual particles are higher in that region.)

In the case of WHIM, imagine Dark Matter particles increase the probabilities of positive energy/mass virtual particles, and we observe that as hot gas.

Imagine any (+/-) unbalanced probabilities for virtual particles on the path of light rays act like different gas mediums that change the local refractive index, so the light rays bend.

And in the case of BHs, imagine the probabilities of positive energy/mass virtual particles increase so much nearby that some of those particles turn real, which we could observe as Hawking Radiation.

I just realized that if my ideas about the true nature of spacetime and gravitational fields (stated above) are correct, then it would mean the Casimir Force can actually be thought of as creating artificial gravity, like in Star Trek for example.

I am guessing that if positive spacetime curvature slows down time, then negative curvature should speed it up. Then if the Casimir Force is creating spacetime curvature, and since we can make it negative in the lab, then we can make time move faster, and it may be measurable in the lab.

I wonder if we could use sheets of Graphene like Casimir Plates and stack them as countless layers to create a multiplied Casimir Force generator. Then we could also add a strong electric and/or magnetic field to amplify that force. Could a device like that create a human-weight-level strong artificial gravity field?

Imagine you made bricks of artificial gravity generators.
Imagine a spaceship (or space station) with a single floor of those bricks. Imagine the crew walks on the top and bottom of that single floor (upside-down relative to each other). So you have a kind of symmetric (up-down) 2-floor internal spaceship design.

Also, what if those bricks could also create artificial anti-gravity?
(Wikipedia says we can generate either an attracting or a repelling Casimir Force.) If that is possible, imagine each floor of the spaceship is 2 layers of bricks. The top layer generates gravity, the bottom layer generates anti-gravity. People on top feel the downward force of gravity, but people on the lower floor do not feel the upward force of gravity, because the anti-gravity layer (which they are closest to) cancels out the total gravity to zero for them.

I wonder what would happen if we somehow created artificial gravity in front of a spaceship and artificial anti-gravity in the back? Could that cause the spaceship to move forward faster and faster, like it keeps falling into a gravity well?

If we can create artificial anti-gravity, I think it could also be useful as a shield in space, against space dust etc.

What if the Planck particle is the smallest and the Dark Matter particle is the biggest size/energy particle of the Universe?

Unpublished additional comments:

If we can create positive and negative artificial gravity (using the Casimir Force), and put them side by side to create movement, then what if we do it with the rotor of an electricity generator? (+- Casimir Force could be generated using multiple layers of Graphene sheets as Casimir Plates, and maybe amplified with a maximally strong permanent magnet.) And if that worked, would it mean creating free energy from spacetime itself (Zero-Point Energy)?

Thursday, October 12, 2017

Equivalence Principle

Why are inertial and gravitational mass always equal?

Assume Newton's second law (F=m*a) is true.
Assume we used a weighing scale to measure the gravitational mass of an object on the surface of the Earth. A weighing scale actually measures force. But since we know the (free fall) acceleration is the same for all objects on the surface of the Earth, we can calculate the gravitational mass of the object as:
m=F/a

Now imagine a thought experiment:

What if the gravity of Earth instantly switched to anti-gravity (but with the same magnitude as before)?
Then the object would start accelerating away from Earth. What if we try to calculate the inertial mass of the object by measuring its acceleration? Realize the magnitude of that acceleration would still be the same for all objects, but with reversed sign, since the direction of acceleration is reversed. Then we have:
m=(-F)/(-a)=F/a

We assumed that the magnitude of the gravitational acceleration is the same for all objects. Because a=F/m and F=G*M*m/d^2, then a=G*M/d^2 for all objects on the surface of the Earth (M: Earth mass; m: Object mass).
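A quick numeric check (Python) of a=G*M/d^2 at the Earth's surface, using standard values:

    G = 6.674e-11            # gravitational constant
    M = 5.972e24             # Earth mass (kg)
    d = 6.371e6              # Earth radius (m)
    print(G * M / d**2)      # ~9.82 m/s^2, the same for every object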

So Newton's second law, combined with Newton's Law of Gravity, leads to inertial and gravitational mass always being equal. Then to prove the Equivalence Principle, we would need to prove Newton's laws first.

Newton's Law of Gravity (F=G*M*m/d^2) works the same way as Coulomb's Law (F=k*Q*q/d^2), which describes the static electric force, which is a Quantum Force. Doesn't that mean Newton's Law of Gravity can be explained with Quantum Mechanics, or at least that it is compatible with QM?

Can Newton's second law be explained with QM?

https://en.wikipedia.org/wiki/Equivalence_principle
https://en.wikipedia.org/wiki/Mass#Inertial_vs._gravitational_mass
https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation
https://en.wikipedia.org/wiki/Coulomb's_law

Sunday, October 8, 2017

The Quest For Ultimate Game Between Humans And Computers


I think Game Theory is one of the main branches of Computer Science.
A lot is known about the theoretical and practical complexity of common games like Chess, Go, Checkers, Backgammon, Poker, and their many possible variants. Like how hard they are for classical (and quantum?) computers from a basic brute-force search viewpoint, from the viewpoints of multiple general smart search algorithms, or from the best known customized search viewpoints.

In recent years there were multiple big game matches between human grand masters and classical computer software (sets of algorithms) running on various types of computers, with different processing speeds, numbers of processors, numbers of processing cores, and memory sizes and speeds. The first I heard of was a then-world-champion human losing to a (classical) computer at Chess. Later I heard about a human grand master losing to a (classical) computer at Go.

One may think humans will eventually lose against classical computers at any given game, and that against quantum computers (which are much more powerful) humans will never have any chance.
But if we look at the current situation closer, I think it is still unclear.

Were those famous Chess and Go matches between human grand masters and classical computers really fair for both sides?
I think not. In both cases the software analyzed countless historical matches and became an expert on every move of those matches.
Which human grand masters have such knowledge/experience and would be able to recall any of it at any moment in a game they are playing? Can there be a more fair way?

What if a Chess/Go software (intentionally) started the game matches with no knowledge of any past game matches other than its own (games it played against itself)? And also, isn't it obvious that a human grand master would best recall the games he/she played himself/herself in the past? Wouldn't a Chess/Go match between a human grand master and a computer be much more fair with such a constraint for the computer side?

Can we make game matches between humans and (classical) computers even more fair?

I think humans lost at Chess first because the number of possible future moves does not increase (exponentially) fast enough, so a classical computer of today is able to handle it well.
In Go, however, the number of possible future moves does increase (exponentially) fast enough. The computer software used deep-learning ANN software instead of relying on its ability to check so many possible future moves. So unlike in Chess, the computer did not have a powerful future-foresight ability. But does this mean computers would eventually beat any human at any similar board game, using an ANN and/or future-foresight ability?

I think it is possible the ANN approach worked successfully for Go because its rules are much simpler than those of Chess, for example. I don't think there is any evidence (at least not yet) that the ANN approach would always work against any board game. Also consider that the board size of Chess (8 by 8) is much smaller than that of Go (19 by 19), which means the number of possible future moves increases much faster for Go, so a (classical) computer cannot handle it.

How about we try to combine the strength of Go (against future foresight) with the rule complexity of Chess? For example, there is a variant of Chess called Double Chess that is played on a 12 by 16 board. I think we could reasonably expect a game match between a human (grand) master and any classical computer (hardware + software) to be much more fair for both players than any past matches. I think so because the number of possible future moves should increase similarly to Go (if not even faster), because of the closer board size and the usage of multiple kinds of game pieces (which are able to move in different ways). Also consider how many high-quality past game examples would be available to learn/memorize for both sides, which I am guessing should not be many for Double Chess.

So if we used Double Chess for game matches between humans and computers, could we find out the ultimate winner for sure? What if the computer wins again; would that really mean the end for the human side for sure?

Assuming we lost again, what if we created an even more complex (for both humans and computers) variant of Chess by using an even larger board? Like if we turned Double Chess into Double Double Chess?
And/or what if we added a few of the proposed new chess pieces to the game? Could we then really create a board game at which no classical computer (hardware + software) could ever beat a human master player?

Why is this important?
Because I think the question actually goes far beyond deciding the final outcome of a friendly and fair battle between the creators and their creations. What really is the human brain? Is it an advanced classical computer, a quantum computer, or an unknown kind of computer? How do human grand masters of Chess/Go play the game compared to computers? Do humans rely only on past knowledge of the game and as much future foresight as they can manage?
Or do humans have much more advanced algorithms running in their brains compared to computers? I think how a human player decides game moves is definitely similar to how an ANN algorithm does it, but it is still beyond that. Think about how we make decisions in our brains every moment of our daily lives. At any given time we have a vast number of possibilities to think about. Do we choose what to think about every moment randomly? If there are certain probabilities (which depend on individual past life experiences), how do we make choices between them every moment, again and again, fast? I think the most reasonable explanation would be that our brains are not classical, but quantum computers. (So neurons must be working like qubit registers.)

And if that is really true, it would mean no classical computer (hardware and software) could ever beat a human brain in a fair game.

(Also, if the human brain is a quantum computer, how about the rest of the human body? The possibilities would be Quantum Computer (QC), classical computer (Turing Machine (TM)), Pushdown Automaton (PDA), Finite State Machine (FSM). To decide, I think we could look at (Functional) Computer Models of biological systems. Do they operate like a FSM, PDA, TM, or QC? Do their algorithms have conditional branches and conditional loops, like a program for a TM? Or do they always use simple state transitions, like a FSM? I don't know much about how those modelling algorithms work; my guess is they are like a TM (which would mean the human body (except the brain) operates like a classical computer).)

https://en.wikipedia.org/wiki/Game_theory
https://en.wikipedia.org/wiki/Computer_chess
https://en.wikipedia.org/wiki/Computer_Go
https://en.wikipedia.org/wiki/List_of_chess_variants
https://en.wikipedia.org/wiki/Double_Chess
https://en.wikipedia.org/wiki/Automata_theory
https://en.wikipedia.org/wiki/Finite-state_machine
https://en.wikipedia.org/wiki/Pushdown_automaton
https://en.wikipedia.org/wiki/Turing_machine
https://en.wikipedia.org/wiki/Quantum_computing
https://en.wikipedia.org/wiki/Modelling_biological_systems

Saturday, October 7, 2017

What If Reality Is A CA QC At Planck Scale?

What If Reality Is A CA QC At Planck Scale?

If the idea in the title above is assumed to be true, can we make any predictions to check it?

What our experiments and observations tell us at macro scale, where Relativity seems to rule, is that there is no indication of quantization of spacetime or gravity.
But at micro scale, where Quantum Mechanics seems to rule, it seems all units are quantized (discrete) in terms of Planck units.
So Quantum Mechanics seems directly compatible, and I think Relativity is not directly but indirectly compatible, if Relativity is assumed to be an emergent property.
(For example, simple CA used for fluid simulation are discrete at micro scale, but create a seemingly continuous world of classical fluid mechanics (the Navier-Stokes Equations).)

If our reality is really created by a CA QC (always discrete, both structurally and in its cell state values) operating at Planck scale, then I would think:

Any time duration divided by the Planck Time must always be an integer.

Any length divided by the Planck Length must always be an integer.

The Compton Wavelength of any quantum particle divided by the Planck Length must always be an integer.

The De Broglie Wavelength of any quantum particle divided by the Planck Length must always be an integer.

If the minimum possible particle energy (the unit particle energy) is the energy of a photon that has a wavelength equal to the Planck Length,
then (the Compton Wavelength of any quantum particle divided by the Planck Length) must be how many units of particle energy that particle is made of.
(If so, then if there is any mathematical order in the masses of elementary particles, maybe it must be searched for after
converting their Compton Wavelengths to integers (by dividing each by the Planck Length)?)
(Also, the energy of a Planck particle (in a BH) must be the maximum energy density possible in the universe?
(If so, then the energy of the Planck particle (or its density?) divided by the unit particle energy is how many possible discrete energy levels (the total number of states) there are per Planck cell?))

Also I think, since all quantum particles are known to be discrete in Planck units (which are known to be the smallest possible units of space, time, wavelength, frequency, amplitude, phase, energy, and possibly also mass), this implies (or is compatible with) all known (and maybe also unknown) quantum particles actually being some kinds of quasi-particles (which I think could be described as clusters of state information), created by The Reality CA QC At Planck Scale (TRCAQCAPS? :-).

At least my interpretation is that Stephen Wolfram, in a lecture, explained that the neighborhood of a (any) CA is related to its structural dimensions.
From that, and since we also know our universe/reality seems to be 3 space dimensions plus a time dimension at all scales, everywhere and everywhen,
we could conclude that the CA part of our reality should have 4 neighbors for each cell, in whatever physical arrangement is chosen among all the physical possibilities.
For example, if the Von Neumann neighborhood physical arrangement is chosen, it would imply we are talking about a 2D square lattice CA.
Or it could be that each center cell is connected to (physically touching) 4 neighbors located around it, like the four vertex corners of a regular tetrahedron.
Are there any other physical cell arrangement possibilities I do not know of?

Also, I think all physical conservation laws, like conservation of energy, imply the CA rules must always conserve the information (stored by the cells).

But what are the full range of possibilities for the internal physical structure/arrangement of the CA cells?

I think first we would need to determine what discrete set of state variables (each made of qubit registers) each CA cell needs to store.
I think if we want the CA to be able to create all quantum particles as quasiparticles, then each cell would need to store all basic internal quantum particle wave free variables as discrete qubit information units.
Assuming each cell is made of a physical arrangement of a total of N individual single-qubit storage subcells,
and from what we know about both the discrete wave and particle nature of quantum particles, I think it should be possible to determine how many qubits (at least) are needed for each free state variable.

But do we really know for certain that the CA cells would need to store only quantum particle information?

Wouldn't they also need to store discrete state information about local spacetime?
Because it definitely seems spacetime can bend even when it contains no quantum particles, like around any massive object.
Then the question is what spacetime/gravity state information all the CA cells would also need to store.
Since gravity is the bending of spacetime (which would be flat without gravity), and the local bending state (and more) everywhere is described by the Einstein Field Equations,
we must look into how many free variables those equations contain,
and how many qubits (at least) would be needed (to express any possible/real value of spacetime state) to store each of those free variables.

But what if the CA cells do not really need to store spacetime state information?
I had read that the equations of Relativity are similar to the equations of thermodynamics, which are known to "emerge from the more fundamental field of statistical mechanics".
Yes, it seems spacetime can still bend even when it contains no real quantum particles, but doesn't it always contain virtual particles?
(According to QM, virtual particle pairs, where one particle always has positive and the other negative energy/mass, pop in and out of existence for extremely short durations, everywhere.)
(I think those pairs of virtual particles must be going out of existence by colliding back together, so their energies cancel out.)
Realize that what determines the bending state of spacetime anywhere is the existence of real quantum particles there.
If there are lots of real quantum particles with positive energy/mass, then the spacetime has positive curvature there.
And if there were lots of real quantum particles with negative energy/mass, then the spacetime would have negative curvature there.
What if the total curvature state of any spacetime volume is completely determined by the balance (and density) of positive and negative quantum particles there?
(Meaning, if the spacetime curvature is positive somewhere, then if we calculated the total positive and negative energy from all real and virtual particles there, we would find the positive energy is higher, accordingly. And vice versa: if the spacetime curvature is negative somewhere, the total negative energy is higher, accordingly.)
What would this mean where there is a gravitational field but no real (positive energy) particles?
I think it would mean the number of positive energy virtual particles must be higher than the number of negative energy virtual particles there at any given time.
The consequence of this for the CA cells would be that they would only need to store (positive/negative) quantum particle state information; no spacetime state information.

And if we could really determine exactly how many physical qubits each of the CA cells would (at least) need,
then we could research the physical arrangement possibilities for the internal physical structure of the CA cells.

A reader may have noticed that a big assumption for some of the above ideas is physical realism.
Because I think, if we don't really need physical realism (plausibility), then how can we hope to make any progress on solving the problem of reality, if it is not physically realist itself? :-)

I think a prediction of this TRCAQCAPS idea is that Black Holes must be made of Planck particles.
(Imagine the size (Compton Wavelength) of any quantum particle keeps getting smaller with increasing gravity, until finally its Compton Wavelength becomes equal to its Schwarzschild radius.)
I think Hawking Radiation implies BHs have at least a surface entropy, indicating discrete information units/particles in units of Planck area.
I think that could be how a BH would look to observers around it, and the actual total entropy of a BH could be the Event Horizon volume divided by the Planck (particle/unit?) volume.

I think if spacetime is discrete at Planck scale, maybe the Holometer experiment could help prove it someday.

Could a Gravitational Wave detector in space someday find evidence of GW discretization (and therefore of spacetime discreteness)?

I recently read news (some links are referenced below) about a new kind of atomic clock using multiple atoms together to get a (linearly/exponentially? (based on the number of atoms)) more stable time frequency.
I am guessing (I did not fully read all the news about it) it must be done by forcing the atoms (oscillators) into synchronization somehow.
Which brings the question: what is the limit for measuring time durations, in terms of resolution?
Will atomic clocks someday finally reach the Planck Time measurement scale (and directly show time is discrete in Planck Time units)?

(On a side note, could we create a chip that contains a 2D/3D grid of analog/digital oscillator circuits, and force them into synchronization somehow, to reach atomic clock precision?)

My sincere hope is that the ideas presented above could someday lead to testable/observable predictions about the true nature of our universe/reality.

https://en.wikipedia.org/wiki/Theory_of_relativity
https://en.wikipedia.org/wiki/Quantum_mechanics
https://en.wikipedia.org/wiki/Cellular_automaton
https://en.wikipedia.org/wiki/Von_Neumann_neighborhood
https://en.wikipedia.org/wiki/Tetrahedron
https://en.wikipedia.org/wiki/Quantum_computing
https://en.wikipedia.org/wiki/Planck_particle
https://en.wikipedia.org/wiki/Holometer
https://en.wikipedia.org/wiki/Atomic_clock
https://www.livescience.com/60612-most-precise-clock-powered-by-strontium-atoms.html
https://www.engadget.com/2017/10/06/researchers-increased-atomic-clock-precision/?sr_source=Twitter
https://www.digitaltrends.com/cool-tech/worlds-most-precise-atomic-clock/

Friday, October 6, 2017

Emergent Property Problem

Emergent properties are everywhere in physics.
Some of the biggest ones:
Chemistry is the emergent property of Quantum Mechanics.
Biology is the emergent property of Chemistry.
Psychology is the emergent property of Biology.
Sociology is the emergent property of Psychology.

I think Quantum Mechanics (and Relativity) is also an emergent property of a Cellular Automaton Quantum Computer (CAQC) operating at Planck scale. If so, how can we find out its operation rules?

How about we try to understand the general mathematical problem first?

The problem is this:
We are given the high level (macro scale) rules of an emergent property and asked: what are the low level (micro scale) rules which created those high level rules?
(Also, the reverse of this problem is another big problem.)

Could we figure out the rules of Quantum Mechanics only from the rules of Chemistry (and vice versa)?

When we try to solve a complex problem, obviously we should try to start with a simpler version of it, whenever possible.

There are many methods for Computational Fluid Dynamics (CFD) simulations. If we were given 2D fluid simulation videos of a certain resolution and duration for each different method, could we analyze those videos using computer software to find out which video was produced by which method? At what resolution and what duration does the problem become solvable/unsolvable for certain? Moreover, at what resolution and what duration can we, or can we not, figure out the specific rules of each method?

How about an even simpler version of the problem:
What if we used a two-dimensional cellular automaton (2D CA)?
Imagine we run any 2D CA algorithm using X*Y cells for N time steps to create a grayscale video.
Also imagine each grayscale pixel in the video is calculated as the sum or average of M by M cells, like a tile.
At what video resolution and what video duration can we, or can we not, figure out the full rule set of the 2D CA algorithm?

How about an even simpler version of the problem:
What if we used a one-dimensional cellular automaton (1D CA)?
Imagine we run any 1D CA algorithm using X cells for N time steps to create a grayscale video.
Also imagine each grayscale pixel in the video is calculated as the sum or average of M cells, like a tile.
At what video resolution and what video duration can we, or can we not, figure out the full rule set of the 1D CA algorithm?
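Here is a minimal Python sketch of generating such a grayscale video for the 1D case; rule 30 and all the sizes are arbitrary example choices:

    import numpy as np

    rule, X, N, M = 30, 256, 128, 8                   # arbitrary example choices
    table = [(rule >> i) & 1 for i in range(8)]       # Wolfram rule lookup table

    row = np.zeros(X, dtype=int)
    row[X // 2] = 1                                   # single live cell start
    video = []
    for _ in range(N):
        idx = 4 * np.roll(row, 1) + 2 * row + np.roll(row, -1)
        row = np.array([table[i] for i in idx])
        video.append(row.reshape(-1, M).mean(axis=1)) # one tile-averaged frame row

    video = np.array(video)   # the N x (X/M) grayscale "video"
    # the open question: at which (X/M, N) can `rule` be recovered from `video`?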

(And the reverse problem is this:
Assume the grayscale video described above for a 1D/2D CA shows the operation of another CA (which is the emergent property).
Given the rule set of any 1D/2D CA, predict the rule set of its emergent-property CA for any given tile size.)

Also, what if the problem in either direction has a constraint?
For example, what if we already know that the unknown 1D/2D CA we are trying to figure out is a Reversible CA?

https://en.wikipedia.org/wiki/Cellular_automaton
https://en.wikipedia.org/wiki/Elementary_cellular_automaton
https://en.wikipedia.org/wiki/Reversible_cellular_automaton

Thursday, September 14, 2017

POWER OF QUANTUM COMPUTERS

It is clear that when it comes to solving numerical search problems like Integer Factorization, quantum computers would allow us to find the solution(s) almost instantly.
We just set up the problem (multiply two unknown integers to get an unknown integer result, then set the unknown result to the result we want) and the input integers become known.
So quantum computers are vastly more powerful than regular computers for solving numerical search problems.

But we also use regular computers for symbolic calculation
(CAS (Computer Algebra System) software like Mathematica, Maple, etc.). What more could quantum computers provide when it comes to symbolic calculation?

I think they could provide the same benefit as for numerical calculation, meaning solving symbolic search problems almost instantly.
Imagine we could just set up an equation expression string as input; then the quantum computer sets the output string (with unknown value) to a general solution expression (known value), if such a solution really exists/is possible.
For example:
1)
Input string: "a*x^0+b*x^1=0"
String value search problem: "x=?"
Output string: "-a/b"
2)
Input string: "a*x^0+b*x^1+c*x^2=0"
String value search problem: "x=?"
Output string: "(-b+(b^2-4*a*c)^(1/2))/(2*a)"

I think using quantum computers for symbolic calculation should allow us to solve many important such problems which we cannot solve with regular computers in a practical time.
I am guessing those would even include some Millennium Prize Problems, like finding (all) general solution expressions for the Navier-Stokes equations (and proving the Riemann Hypothesis?).

I think, assuming we will have a suitable general-purpose quantum computer someday, the only issue is figuring out exactly how to express and solve symbolic calculation problems like the two examples above.

Let's try to solve the first problem using a quantum computer:
Assuming the quantum computer symbolically calculated the solution (expression string E), how could we test whether it is correct or not?
How about creating an equation that would be true only if E is a valid solution, which is the input equation itself:
"a*E^0+b*E^1=0" or "a+b*E=0"
Then I think the solution algorithm for the quantum computer would be:
Start with unknown values E, a, b.
Calculate a+b*E (not a numerical calculation but a symbolic expression calculation, using an expression tree).
Set the unknown calculation result to 0.
The unknown string E collapses to the answer: "-a/b"

And if we consider how we could do the symbolic calculation step above using a regular computer, which requires manipulating an expression tree using stack(s), then we need to figure out how to create a quantum stack using a quantum computer.
(Imagine a stack that can do any number of push/pop operations instantly, to collapse into its final known state instantly.)
(If we could do quantum stacks, then we also could do quantum queues.)
(And then quantum versions of other standard programming data structures would also be possible.)

What could be the most practical way to build a large scale quantum computer?

I think currently building a quantum computer is really hard because our physical world is highly noisy at quantum scale.
Imagine using single atoms/molecules as qubits.
Imagine cooling them close to absolute zero in a vacuum environment that needs to be perfectly maintained.

Could there be a better way?

What if we create a quantum computer in a different level of reality, which does not have noise?

Think about our regular digital computers.
Could we think of the bit values in the memory of a working regular computer as a different level of reality of quasiparticles, which does not have noise?

Can we create an extrinsic-semiconductor-based quantum computer chip that creates and processes qubits as quasiparticles?
(With the quantum computer designed and operated like a Cellular Automaton, similar to Wireworld?)
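
(For reference, the Wireworld update rule itself is simple enough to state in a few lines of Python; a minimal sketch:)

# Minimal Wireworld update rule: 0 = empty, 1 = electron head, 2 = electron
# tail, 3 = conductor. A conductor becomes a head iff exactly 1 or 2 of its
# 8 neighbors are heads; heads become tails; tails become conductors.
def wireworld_step(grid):
    h, w = len(grid), len(grid[0])
    def heads_around(r, c):
        return sum(grid[(r + dr) % h][(c + dc) % w] == 1
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    new = [row[:] for row in grid]
    for r in range(h):
        for c in range(w):
            if grid[r][c] == 1:
                new[r][c] = 2
            elif grid[r][c] == 2:
                new[r][c] = 3
            elif grid[r][c] == 3 and heads_around(r, c) in (1, 2):
                new[r][c] = 1
    return new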

https://en.wikipedia.org/wiki/Quasiparticle
https://en.wikipedia.org/wiki/Electron_hole
https://en.wikipedia.org/wiki/Extrinsic_semiconductor
https://en.wikipedia.org/wiki/Cellular_automaton
https://en.wikipedia.org/wiki/Wireworld

Continuum Hypothesis is False

The continuum hypothesis states: "There is no set whose cardinality is strictly between that of the integers and the real numbers."

Resolution:
Express each set in question as a set of points in (ND) Euclidean space,
and calculate their fractal dimensions to compare their cardinalities =>

Set of all integers => Fractal Dimension=0
Set of all real numbers => Fractal Dimension=1
Set of all complex numbers => Fractal Dimension=2
Set of all quaternion numbers => Fractal Dimension=4
Set of all octonion numbers => Fractal Dimension=8
Set of all sedenion numbers => Fractal Dimension=16
Set of all points of a certain fractal => Fractal Dimension:
Cantor set: 0.6309
Koch curve: 1.2619
Sierpinski triangle: 1.5849
Sierpinski carpet: 1.8928
Pentaflake: 1.8617
Hexaflake: 1.7712
Hilbert curve: 2
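
(These dimensions can also be estimated numerically by box counting; below is a minimal Python sketch for the Cantor set, where the expected value is log(2)/log(3) ≈ 0.6309:)

# Box-counting estimate of the Cantor set's fractal dimension.
# Expected value: log(2)/log(3) ~= 0.6309.
import math

def cantor_points(depth):
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        intervals = [piece for a, b in intervals
                     for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return [(a + b) / 2 for a, b in intervals]   # interval midpoints as samples

points = cantor_points(12)            # 4096 sample points
for k in (3, 9, 27, 81, 243):         # boxes of size 1/k
    boxes = len({int(p * k) for p in points})
    print(k, math.log(boxes) / math.log(k))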

Tuesday, September 5, 2017

EXPLAINING DARK ENERGY AND DARK MATTER

If the Universe/Reality (U/R) is a Cellular Automaton (CA) (Quantum Computer (QC)) operating at Planck Scale (PS), then how could it explain Dark Energy (DE) and Dark Matter (DM)?

Assume Quantum Physics (QP) is its first Macro Scale (MS) Emergent Property (EP), and Relativity Physics (RP) is its second MS EP;
then Dark (Energy & Matter) Physics (DP) could be its third MS EP!
(Just like, for example, Newton (Navier-Stokes) Physics (NP) is the first Macro Scale (MS) Emergent Property (EP) of some CA, like FHP and LBM.)

Is the ratio of DM to Matter (DM/M) always (everywhere and everywhen) constant in the Universe?
Is the ratio of DE to Vacuum Energy (DE/VE) always (everywhere and everywhen) constant in the Universe?
(If so, could they be a consequence of DP being what is described above?)

Does every EP have a finite scale range?
(Do fluid simulation CA (like FHP/LBM) have a second layer of EP at super-macro scale (where NP no longer applies)?)

Wednesday, August 16, 2017

NATURE OF TIME

The concept of “now” being relative implies an unchanging 4D “Block Universe” (so the future is predictable), and it comes from Relativity.
But QM says the opposite (the future is unpredictable; there is only a certain probability for any future event).

As we look at the Universe/reality starting at microscale (particle size) and go to macroscale, future events become more and more certain.
For example, think of how certain the things you plan to do tomorrow are: can't we say they are not perfectly certain, but close?
But also think of how certain the motion of Earth in its orbit is tomorrow. Isn't it much more certain (but still not perfectly certain)?

The future being unpredictable at microscale and becoming more and more predictable at higher and higher scales also happens in Cellular Automata (which are used for fluid simulation).

I think one clear implication of the future becoming more and more predictable at higher and higher scales is that time must be an emergent property.
Which in turn implies spacetime must be an emergent property.
Which in turn implies Relativity must be an emergent property.

I think I had read somewhere that the equations of GR are similar to the equations of some kind of (non-viscous?) fluid.
If so, it would make sense, considering Cellular Automata used for fluid simulation show behavior similar to GR.

I just came across part of an article from Scientific American, September 2015, that says something very similar to what I had said about the nature of time:

“Whenever people talk about a dichotomy, though, they usually aim to expose it as false. Indeed, many philosophers think it is meaningless to say whether the universe is deterministic or indeterministic. It can be either, depending on how big or complex your object of study is: particles, atoms, molecules, cells, organisms, minds, communities. “The distinction between determinism and indeterminism is a level-specific distinction,” says Christian List, a philosopher at the London School of Economics and Political Science. “If you have determinism at one particular level, it is fully compatible with indeterminism, both at higher levels and at lower levels.” The atoms in our brain can behave in a completely deterministic way while still giving us freedom of action because atoms and agency operate on different levels. Likewise, Einstein sought a deterministic subquantum level without denying that the quantum level was probabilistic.”

(All my comments above also published here:
http://scienceblogs.com/startswithabang/2017/08/13/comments-of-the-week-172-from-sodium-and-water-to-the-most-dangerous-comet-of-all/)

If the future (time) becomes more and more certain as we go from microscale to macroscale, here is a thought experiment for determining how exactly that happens:
Imagine in a vacuum chamber we dropped a single neutral Carbon atom from a certain height many times and measured how close it hits the center of the (circular) target area, and with what probability. Later we repeated the experiment with C60 molecules, then with solid balls of 60 C60 molecules, then with solid balls of 3600 C60 molecules, and so on.
I think what would happen is that bigger and bigger solid balls would hit closer and closer to the center with higher and higher probabilities. And the general graph (an exponential curve?) of the results would tell us how exactly the future (time) becomes more and more certain.

A more advanced version of the thought experiment could be this:
Imagine we started the experiment with micro balls and a very small drop height, and as the radius of the solid balls got bigger and bigger, we increased the drop distance by the same ratio as the radius.
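
(A toy classical model of this scaling, not real quantum mechanics: if each constituent of a dropped ball contributes an independent random transverse kick, the center-of-mass deflection shrinks like 1/sqrt(N). A minimal Python sketch:)

# Toy model (not real QM): each of the N constituents of a dropped ball gets
# an independent random transverse kick; the center-of-mass deflection then
# shrinks like 1/sqrt(N), i.e. bigger balls land closer to the target center.
import random, statistics

random.seed(1)
for n in (1, 60, 3600):                  # constituents per ball
    hits = []
    for _ in range(1000):                # repeated drops
        com = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
        hits.append(com)
    print(n, round(statistics.stdev(hits), 4))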

Monday, August 7, 2017

FUTURE OF PHYSICS

If we look at the history of physics, is there a clear trend that allows us to guess its future?

What are the major milestones in physics history?
I think it could be said:
1) Ancient Greece (level) Physics
2) Galileo (level) Physics
3) Newton (level) Physics
4) Einstein (level) Physics
5) TOE (level) Physics(?)

I think there is indeed a clear trend if you think about it.
Each new revolution in physics brings something like an order of magnitude increase in the complexity of the math (calculations), not just a new theory.
So I would guess doing calculations to solve physics problems using a TOE will be practically impossible using pen and paper only.
I think it will require a (quantum) computer.
(Realize that all physics problems (where an answer is possible) can be solved today using non-quantum (super)computers/calculators/pen & paper.)

I think if the Universe (or Reality) turns out to be a Cellular Automaton design running on an ND matrix qubit (register) quantum computer (with Planck scale cells),
then it would fit the above guess about the future of physics (TOE) perfectly.

Monday, July 31, 2017

Physics Of Star Trek

I have seen maybe all of the Star Trek TV show episodes and movies.
Below I will try to provide more plausible ways of realizing similar technologies according to known laws of physics of our Universe.
I do not know if similar explanations were provided by anyone before.

Super Energy Sources:
They could be portable fusion reactors which are almost perfectly efficient.
They could provide continuous power (similar to DC) or repeating pulses (similar to AC).
There may also be super batteries that store a dense cloud of electron gas in vacuum (or as a BEC?).

Stun guns:
Imagine a super powerful gun that momentarily creates conductive paths in air using UV pulse/continuous lasers.
It sends a powerful electroshock to the target through those conductive paths.
(I think this tech is already being developed currently.)

Teleportation:
Imagine two teleportation machines (chambers).
The sender machine creates some kind of quantum shock wave that instantly destroys the target object into gamma photons that carry the same quantum information.
That information is sent to the receiver machine, which has a giant BEC (made of the same kinds of atoms/molecules, in the same proportions, as the target object?).
When the information is applied to the BEC (instantly, like a quantum shock wave), it somehow instantly quantum mechanically collapses into an exact copy of the object.

Phasers:
Instantly destroys the target object using a quantum shock wave similar to the one used in teleportation.
(The target object instantly gets destroyed as in teleportation, but there is no receiver for its quantum information.)

Artificial Gravity:
Imagine if we had small coils that could create high level positive/negative spacetime curvatures around themselves (spherical/cylindrical).
We could place a grid of those coils under floors etc. to create artificial gravity.

Force Fields:
Imagine if we created spherical/cylindrical spaceships covered by a dense grid of (+/-) gravity coils,
and also a dense grid of (superconductor) coils that can create (+/-) electric/magnetic fields.
Would it not be possible to use them to create "force fields" all around the spaceships to deflect any kind of attack (atom/particle/photon)?

Cloaking Fields:
Imagine if we created spherical/cylindrical spaceships covered by a dense grid of (+/-) gravity coils.
Would it not be possible to use them to create a photon deflection field all around the spaceships?

Warp Speed:
Imagine if we created spherical/cylindrical spaceships covered by a dense grid of (+/-) gravity coils.
Would it not be possible to use them to create a warp bubble all around the spaceships, to act like an Alcubierre Drive?

Sub-space Communication:
(Since we assume we have the ability to manipulate the curvature of spacetime.)
Imagine we have the tech to create micro wormholes as twins and are able to trap them indefinitely.
A communication signal enters either one and instantly comes out of the other.
Each time we create a new set of twin micro wormholes, we keep one in a central hub on Earth,
and the other is carried by a spaceship or placed on a different planet/moon/space station.
(The same tech could also be useful to create and trap micro Black Holes, which may be useful as compact batteries.)

Electronic Dampening Field:
Imagine an EMP created like a standing wave using a grid of phased array EMP generators.

Spaceship hulls that can withstand almost any kind of attack, at least for a while if necessary:
How about metallic hydrogen, or another solid material that we create using ultrapressure (and temperature)?

I think it is also clear that Star Trek physics requires devices with the ability to create strong positive and negative spacetime curvatures.
How could that work according to the laws and limitations of known physics, assuming they must always be obeyed?

According to General Relativity, spacetime bends in the presence of positive or negative mass/energy(/pressure/acceleration).

What if we destroyed a small amount of matter/antimatter in a spot (as pulses)?

(Could there be an economical way to create as much antimatter as we need? Think about how we can easily induce a permanent magnet to permanently switch its N and S sides, by momentarily creating a strong enough reverse magnetic field using an electromagnet.
Could there be any way to create a special quantum field/shockwave (using an electric and/or magnetic field generator, or a laser?)
such that when it passes through a sample of matter (trapped in mid-vacuum), it induces that matter to instantly switch to antimatter (so that instantly all electrons switch to positrons, all protons to anti-protons, all neutrons to anti-neutrons)?)

What if we created an arbitrarily strong volume/spot of magnetic and/or electric field(s)?

What if we created a spot of ultrapressure using a tech way beyond any diamond anvil?

What if we created a spot of negative ultrapressure (using pulling force)?
(Imagine if we had or created a (solid?) material that is ultrastrong against pulling force (even for a moment).)

What if we had or created an ultrastrong (solid?) disk/sphere/ring and trapped it in mid-vacuum,
and later created an ultrapowerful rotational force on it (even for a moment) using an ultrapowerful magnetic field,
so that the object gained (even for a moment) an ultrahigh speed and/or positive/negative acceleration?

Sunday, July 30, 2017

3D VOLUME SCANNER IDEA

I recently learned about an innovative method for getting 3D scans of objects. It overcomes the line of sight problem and captures the inner shape of the object as well. A robot arm dips the object into water in different orientations. Each time, how the water level changes over time gets measured, and from these measurements the 3D object shape is calculated like a CAT scan.

I think this method can be improved upon greatly as follows:

Imagine we put a tight metal wire ring around the object we want to scan, maybe using a separate machine.
It could be a bendable but rigid steel wire ring, maybe a bicycle wheel ring, or even a suitable kind of plastic.
The object could be in any orientation, held tight by the ring.

Imagine we have an aquarium tank filled with liquid mercury
(which would keep the object dry, unlike water, and also the tank walls, so that measurements would be more precise).
(Also, mercury is conductive, which would make measurements easier using electronic sensor(s).)
(It could also be a cylindrical tank.)

Imagine inside the tank we have a vertical bar that can move a horizontal bar up and down under electronic control.
Imagine that horizontal bar, at its middle (down side), has a hook/lock for the wire ring (around the object).
That hook/lock has an electronically controlled motor that can rotate the wire ring (and so the object) to any (vertical) angle.
(To prevent the ring/object from swinging like a pendulum when it is dipped into the liquid (fast) each time, we could add a second horizontal bar with adjustable height that has a hook/lock for the wire ring at its middle (up side). So the ring would be held in place at its top and bottom points by the two horizontal bars.)

Now imagine that to take new measurements each time, we rotate the object by a small, equal angular amount (within 360 degrees).
Then we dip the object fully into the liquid (at constant speed) and take it fully back out (at constant speed).
Every time we dip the object, we record the changes in the liquid level in the tank over time.
(While the object is fully dipped we could rotate it again and then record liquid level changes while we take the object fully out,
to get two sets of measurements in each cycle, instead of one.)
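
(The core of the reconstruction math for a single orientation is simple: if V(z) is the displaced volume when the object is dipped to depth z, then the object's horizontal cross-section area at that depth is A(z) = dV/dz. A minimal Python sketch with toy data for a unit sphere:)

# One-orientation reconstruction: the cross-section area is A(z) = dV/dz,
# where V(z) is displaced volume at dip depth z (level change * tank base area).
# Toy data: a sphere of radius 1 dipped from its bottom (spherical cap volume).
import math

dz = 0.01
zs = [i * dz for i in range(int(2.0 / dz) + 1)]          # dip depth 0..2R
vol = [math.pi * z * z * (1.0 - z / 3.0) for z in zs]    # cap volume at depth z
area = [(v2 - v1) / dz for v1, v2 in zip(vol, vol[1:])]  # numerical dV/dz

# Compare with the exact slice area pi*(1 - (1 - z)^2) at mid depth z = 1:
i = len(area) // 2
print(round(area[i], 2), round(math.pi * (1.0 - (1.0 - zs[i]) ** 2), 2))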

Of course, mercury is highly toxic and reacts with some metals.
So it would be best to find a better liquid.
The liquid would need to be non-stick, to keep scanned objects and tank walls dry. As low a viscosity and density as possible, a maximal temperature range with linear volume change based on temperature, and constant volume under common air pressures would be better. Stable (not chemically active) and non-toxic are musts.
Also, electrical conductivity would be a plus.

References:
https://www.sciencedaily.com/releases/2017/07/170721131954.htm
http://www.fabbaloo.com/blog/2017/7/25/water-displacement-3d-scanning-will-this-work
https://3dprintingindustry.com/news/3d-scanning-objects-dipping-water-118886/

Saturday, July 29, 2017

A Simple Derivation of General Relativity

According to Einstein's equivalence principle, a person accelerating upwards in an elevator (in outer space with no gravity) cannot distinguish it from gravity (downwards). Then acceleration and gravity are physically equivalent.

Assume a (laser) light beam is sent horizontally from one side (wall) of the elevator to the other side (wall).

What is the Y coordinate of the beam for given X or T, if the upwards constant speed of the elevator is V?
x=c*t (assuming x is positive towards right)
y=v*t (assuming y is positive downwards)
m=y/x=(v*t)/(c*t)=v/c
Applying parametric to implicit conversion:
x=c*t => t=x/c => y=v*(x/c)=(v/c)*x=m*x => line with tangent m

What is the Y coordinate of the beam for given X or T, if the upwards constant acceleration of the elevator is A?
x=c*t (assuming x is positive towards right)
y=(1/2)*a*t^2 (assuming y is positive downwards; constant acceleration from rest)
Applying parametric to implicit conversion:
x=c*t => t=x/c => y=(1/2)*a*(x/c)^2=(a/(2*c^2))*x^2 (parabola)
Geometry says:
if a parabola is y=x^2/(4*f) => f: focal length
The focal length of a parabola is half of its radius of curvature at its vertex => f=r/2
The radius of curvature is the reciprocal of the curvature (curvature of circle: 1/r)
Then:
y=(a/(2*c^2))*x^2=x^2/(4*f) => a/(2*c^2)=1/(4*f) => f=c^2/(2*a)
r=2*f=c^2/a => curvature=1/r=a/c^2
Newton's laws say: F=G*M*m/d^2 and F=m*a => Acceleration of a unit mass in the gravitational field of mass M:
a=F/m=F/1=G*M*1/d^2=G*M/d^2
Then:
curvature=a/c^2=(G*M/d^2)/c^2=G*M/(c^2*d^2)
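
(As a quick numeric check, here is what this final expression gives for a light ray grazing the Sun; a Python sketch with rounded constants:)

# Numeric check: curvature = G*M/(c^2*d^2) for a light ray grazing the Sun.
G = 6.674e-11   # m^3 kg^-1 s^-2
M = 1.989e30    # kg (mass of the Sun)
c = 2.998e8     # m/s
d = 6.963e8     # m (solar radius)

a = G * M / d ** 2        # ~274 m/s^2 (solar surface gravity)
curvature = a / c ** 2    # ~3e-15 per meter
print(a, curvature, 1.0 / curvature)   # radius of curvature ~3e14 m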

Is this formula for calculating spacetime curvature correct (using the mass of the object (star, planet etc.) and the distance from its gravitational center)? I have no idea. I searched online for a similar formula to compare against but could not find one.
If the formula is wrong, I would like to know its correct expression (using the same input variables M and d), of course. And also, if so, whether it is possible to derive that formula from the same thought experiment.

Monday, July 17, 2017

What Is Spacetime?

First assume there is an ND uniform matrix (like a crystal) cellular automata quantum computer (UCAQC), where each of its cells is Planck length size and made of M qubits (like a register (set)).
Assume our universe is a bubble/ball of information (energy) expanding in that matrix.
Assume the time step of the UCAQC is Planck time (which leads to the speed of light being the ultimate speed).
Assume each particle of the Standard Model is a ball/cluster/packet of information moving around.
Assume that when two (or more) particles collide, they temporarily create a combined information (energy) ball that is unstable, because (for some reason) only the particles of the Standard Model are allowed, so the newly created unstable particle is forced to decay/divide into a set of particles allowed by the Standard Model.
Naturally, the existence of a Newtonian spacetime is easy to explain for such a universe.
(Also realize it is naturally compatible with quantum mechanics.)
But how about Relativity?
I think Special Relativity arises because the flow of information about events is limited by the speed of light for all observers.
A thought experiment:
Imagine we have a spaceship in Earth's orbit that sends blue laser to a receiver on the ground.
Imagine the spaceship starts moving away from Earth with its speed keep increasing towards speed of light.
Imagine it reaches a speed such that its laser light looks red to us and to our measurement instruments.
(Because of Special Relativity.)
Realize that an observer on the spaceship would still see blue laser photons leaving the device.
But an observer on the ground sees and measures red laser photons.
The question is, have the laser photons actually lost energy?
Are they really blue (higher energy) or red (lower energy) photons?
Can we not say they are actually blue photons, same as when they were created, but we see/detect them as red photons because of our relative (observer) motion?
What is really happening is the same as how the Doppler effect changes the frequency of sound.
Different observers see photons with different energies because the density of information flow is different for each observer,
even though the speed of information flow is the same (the speed of light) for all observers.
That is why I do not think the expansion of the universe actually causes photons to lose energy.
I think all photons stay the same as when they were created, but they can be perceived with different energies by different observers.
(So when we measure the energy of a photon, we actually measure its information density; not its total information (which is constant and equal for all photons).)
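
(For the thought experiment above, the relativistic Doppler formula fixes the required recession speed; a quick Python sketch, assuming 450 nm for the blue laser and 650 nm for the observed red:)

# Recession speed needed to redshift a 450 nm blue laser to 650 nm red,
# using the relativistic Doppler formula lam_obs/lam_src = sqrt((1+b)/(1-b)).
ratio2 = (650.0 / 450.0) ** 2
beta = (ratio2 - 1.0) / (ratio2 + 1.0)
print(beta)   # ~0.35, i.e. about 35% of the speed of light
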
Similarly, I think the (positive) spacetime curvature around objects with mass compresses the Compton wavelength of all particles present.
In the case of Black Holes, the Compton wavelength of a particle gets compressed as it approaches the event horizon.
Upon reaching the event horizon, the wavelength drops to the Planck length and you get Planck particles (which is what I think Black Holes are made of).

Friday, July 14, 2017

Universal Cellular Automata Quantum Computer

If the Universe is a qubit-based CA quantum computer operating at Planck scale, how can it explain QM and Relativity?

The human mind operating like a quantum computer (software) can explain the Observer Effect:
because of quantum information exchanges between the qubits of the experiment and the qubits of the mind of the observer(s), like operations in a quantum computer.

The particles of the Standard Model (6 quarks + 6 leptons + 4 gauge bosons + 1 Higgs boson) + the Planck particle can be explained as (spherical?) clusters of information.
(Then, using the list of quantum properties common to all particles (like energy, mass, charge, spin, ?), it may be possible to determine how many qubits (at least) are needed for each (Planck size) cell of the universe CA quantum computer.)
(How can particle interactions be explained?)

It can also explain Relativity, because the speed of light limit is the (constant) speed of information transmission of the Universe CA quantum computer.
So each observer can receive information only at the speed of light (constant). Non-moving and moving observers watching the same events would disagree on how fast the events unfold, because each receives the information (light) generated by the events at the same speed but with different information flow density (frequency).

Gravity can be explained as an entropic force.

The Big Bang can be explained as initially creating a ball of (maximally) dense information (energy) in the center of the Universal (CA) Quantum Computer (UCAQC).
Imagine there is a tendency for information to flow from more dense to less dense volumes of the UCAQC, and that causes the expansion of the universe.
I think in the beginning times of the Big Bang this expansion force should be at its most powerful, but later it would drop.
It could be that:
F = U * (1 - V / W)
Where:
F: Expansion Force at time t after Big Bang
U: an unknown constant
V: Volume of Universe Information Ball at time t after Big Bang
W: Max Possible Volume of Universe Information Ball (at time infinity after Big Bang)
Or maybe the expansion force/speed could depend on the current (uniform) curvature of V.
(I had explained how to calculate the universal (uniform) curvature in one of my previous blog posts.)
(But in either case, it would mean there is really no such thing as Dark Energy, nor a universal field of inflation.)
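
(A tiny numeric sketch of the proposed force above, in Python, with placeholder values for the unknown constants U and W:)

# Proposed expansion force F = U * (1 - V / W), with placeholder values for the
# unknown constant U and the maximum possible volume W (both assumptions).
U, W = 1.0, 1000.0
for V in (1.0, 10.0, 100.0, 500.0, 999.0):
    print(V, U * (1.0 - V / W))   # strongest just after the Big Bang, then fading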

What Black Holes Are Made Of 2

I know that many physicists today believe that infinities are a sign of a physical theory breaking down. I think the same. So I don't think BHs have a singularity at their center, and that means they must be made of some kind of particle. And I think there is only one particle that fits the bill (and it does perfectly). It is a hypothetical particle called the Planck particle. Its Wikipedia page says it already naturally shows up in physical equations/calculations.

Also I think BHs must be in some kind of fluid state, similar to Neutron stars.

For example, I remember reading that complex numbers were showing up in solutions of polynomial equations long before they were discovered.

Extreme speculation mode:
I know complex numbers are extremely useful in physics.
It could be said complex numbers are more powerful by being 2D, instead of 1D.
I think if the universe is some kind of cellular automaton (computer) operating at Planck scale,
it is quite possible its calculations are done using quaternions (4D), octonions (8D), maybe even sedenions (16D).

Also, if there are singularities in the centers of BHs, how is it possible that singularities (objects of zero size) can differ from each other so as to create different sizes of BHs around them?
Or should we really accept that properties like mass/energy are just absolutely abstract numbers, so that an object of zero size can contain them (just as pure information), no problem?

In case what I mean is unclear:
Your viewpoint is that the theory does not apply at the center of a BH but still applies all around it. (Or is it that the theory also applies at the center, which is why we must accept the existence of a real singularity?)
But my viewpoint is that the theory breaking at the center means what we think about the structure of BHs must be completely wrong. (Like trying to build a skyscraper on a really bad foundation.)

“What force stops your hypothetical high density ball collapsing into a singularity?”
That is exactly why I was suggesting BHs must be made of Planck particles.
From Wikipedia about Planck Particle:
“its Compton wavelength and Schwarzschild radius are about the Planck length”
Planck particles are the smallest possible particles. Imagine any particle compressed in an unstoppable way: its Compton wavelength gets smaller and smaller until finally it is reduced to the Planck length, where it cannot get any smaller.
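
(That defining property pins down the particle's mass: setting the Compton wavelength h/(m*c) equal to the Schwarzschild radius 2*G*m/c^2 gives m = sqrt(h*c/(2*G)). A quick Python check:)

# Setting the Compton wavelength h/(m*c) equal to the Schwarzschild radius
# 2*G*m/c^2 gives m = sqrt(h*c/(2*G)), on the order of the Planck mass.
import math

h = 6.626e-34   # J*s
c = 2.998e8     # m/s
G = 6.674e-11   # m^3 kg^-1 s^-2

m = math.sqrt(h * c / (2.0 * G))   # ~3.9e-8 kg
r = 2.0 * G * m / c ** 2           # ~5.7e-35 m, about the Planck length
print(m, r)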

I think BHs being made of Planck particles is theoretically possible, and it does not lead to any contradictions with either Quantum Mechanics or Relativity.
But I am not a physicist and I would like to see Ethan writing a post evaluating this idea, if possible.

(I had posted these above comments about a week ago here:
http://scienceblogs.com/startswithabang/2017/07/07/is-it-possible-to-pull-something-out-of-a-black-hole-synopsis/)

Ideas For Long Term Future Of Humanity

Can we move Mars (which is too cold today) closer to the Sun to make it more hospitable for human life?
Could we slow down the orbital speed of Mars (to get it closer to the Sun) by slowly changing the orbits of selected asteroids (and comets?) to make them collide with Mars in a controlled way (and continue doing that for hundreds of years)?
Colliding asteroids with Mars would also increase its mass, which is a good side effect because Mars is significantly smaller than Earth.
Colliding comets is even better, because they would increase the (surface) water content of Mars.
Since we know that the Sun will gradually get hotter and bigger as it ages, here is an utterly insane long term plan to ensure the distant future of humanity, assuming we will have the power to modify the orbits of asteroids and comets such that we can make any of them collide with any planet in a controlled way, so that we can increase or decrease the size of the orbit of the planet. (We would also keep increasing the mass of the planet we bombard; also, we could use available comets to provide extra water to the target planet.)

Imagine first we bombard Mars until its climate and water content are good enough for humanity.
Then we move humanity to Mars (or as much of it as we can),
then we bombard Earth to increase the size of its orbit as much as we want/need.
And afterwards, as the Sun keeps getting bigger/hotter,
we keep moving humanity back and forth between Earth and Mars; each time after we moved humanity to one of the planets, we bombard the other planet to increase the size of its orbit as much as we want/need.

Potential problems would be: can we keep the orbits of all the other planets stable for the long term,
and what are the limits of increasing the mass (and water content) of a planet we want to live on?
Also, after how many moves of humanity would we run out of asteroids to use (and could only use comets)?
Could we still continue by using comets?
If so, when would we run out of comets?
And if we also ran out of comets, what would be the final mass (and water content) of Earth/Mars?
What would be the size of the orbit of Earth/Mars, and would there be any chance of moving humanity to a planet of a nearby star?
(Because I think if the size of the orbit is big enough, it could make it possible to come close to a suitable planet of a nearby star. Keep in mind we would prefer to save all of humanity if possible.)

Another potential problem is, even if we added lots of water to Mars, how would we get a suitable atmosphere?
Assuming we have no electrical power production problem, maybe we could separate lots of water into oxygen and hydrogen gas, and release the hydrogen gas to space.

But then, can we live in an almost pure oxygen atmosphere?
Do the common rocks on Mars have enough nitrogen we could release into the atmosphere?
Or is there any other suitable inert gas we could produce in sufficient quantity from the rocks?

But also, how could we modify the orbits of almost any asteroid or comet?
I don't think any kind of rocket fuel would be enough.
But assuming we can produce portable fusion power generators that can generate something like megawatts for decades, it may be possible to produce enough thrust in space using only electrical power, in different ways.
One way I was thinking of could be to create giant rotating electric and/or magnetic fields around a spacecraft, to swim through the surrounding sea of cosmic rays (positive and negative charged particles) like a submarine.

Of course, if we had the technology to easily modify asteroid and comet orbits, it would also be useful for protecting humanity from any unwanted asteroid or comet impacts anywhere.

Also, could any space habitat be a viable place for humanity to live indefinitely?
Wouldn't it keep getting damaged by cosmic rays?
Could we always repair and protect it?

How about another crazy idea:
Could we build lots of giant towers on Earth with their tops above the atmosphere?
If so, and if we also have the tech to create efficient and powerful pure electric drives for space, maybe we could turn Earth itself into a mobile planet.

This may be the craziest idea:
What if we turned Earth into a mobile planet, and also bombarded Mars with asteroids and comets to bring it closer to Earth and give it similar amounts of water and a similar oxygen atmosphere, and later also turned Mars into a mobile planet?
Then we would have two mobile planets to live on and move anywhere, maybe even to nearby stars.
Then maybe we could keep creating more mobile planets everywhere we go in the universe.

(I had posted these above comments here about two weeks ago:
http://scienceblogs.com/startswithabang/2017/07/01/ask-ethan-could-we-save-the-earth-by-migrating-it-away-from-the-sun-synopsis)

Tuesday, July 4, 2017

Pascal's Wager

My reasoning below is completely hypothetical.
I wanted to try to define an objective approach.
I am not claiming these are the steps I ever followed myself, either.

Pascal's Wager implies that we should give serious consideration to the question of whether or not to believe in (any?) God(s).
Because the potential loss or gain could be infinite.

If we choose not to believe, then I think there is nothing further to consider, because we have our answer.

But let's say we choose to believe; then what? Which God(s) should we believe in?

Then the question becomes which world religion(s) we should choose, isn't it? Because I think it is obvious that not all religions are compatible with each other, so there is no way we could choose to believe all of them together to cover all available options.

Is there really any way to objectively compare all world religions to make a decision about which one to believe? How could we compare any two religions objectively?

I think the first thing to do would be to gather available information about all world religions in a common, comparable format. For that we could create a standard list of questions for all religions.

For each religion we could list:
Which God(s) should we believe in, and what are their powers and properties (like shape, size, age, etc.)?
Do those God(s) want us to believe in them, and do they offer rewards/punishments (finite/infinite)?
Would those God(s) treat us with justice? Are they good?
What are their explanations for the existence of the universe and its creation; how the universe works; why it was created?
Why and how was humanity created?
What are the descriptions of the afterlife, life in hell, life in heaven?
Are there any serious logical inconsistencies or absolute physical impossibilities in their explanations/beliefs/claims?
How does each religion see all the others (also okay to believe (now), was okay to believe in the past but not today, never okay to believe)?
How should we live our life (are any kinds of sacrifices needed)?

So after we collected info about all religions in an objectively comparable way,
would that be enough for each one of us to make a choice?

Assume each person on Earth examined our comparable information about all religions,
and somehow each and every one completely understood the information,
and also agreed that it consists of completely objective statements about each and every religion.

I wonder what percentage of people would choose which religion and what their reasoning(s) would be for their choices.

That's it? Can we not even try to choose a religion absolutely objectively?

I think for that we could try to approach the problem mathematically, as in game theory or probability theory.
But still, can any method of calculation (algorithm) really provide a clear and objective answer without requiring any subjective input values?
I think the answer may be no.

Monday, July 3, 2017

Solution of P versus NP Problem

I know that many people have attempted to prove an answer for the P versus NP problem.
I also know that it is one of the seven Millennium Prize problems.

Here is my idea for a proof (possibly with some missing pieces):

Since quantum computers are theoretically capable of infinite calculations per time step (where the minimum time step could be Planck time),
we can say quantum computers are at least as powerful as any equivalent regular computer.
But also there may be some calculation algorithms where a quantum computer cannot provide an answer any faster than a regular computer.
Then we could at least say for sure that:
computingpower(regularcomputer) <= computingpower(quantumcomputer)

If so, then if we can prove that, no matter what calculation algorithm is used, a quantum computer cannot solve any known NP-complete problem in polynomial time, it would mean that solving any NP-complete problem in polynomial time requires a computer more powerful than any quantum computer.
And that would mean the answer for the P versus NP problem is P < NP.
(Keep in mind we already know that solving any one of the NP-complete problems means solving all of them, because each of those problems can be translated to any other in polynomial time.)
(And as for what kind of computer could be more powerful than any quantum computer: keep in mind that since a quantum computer is capable of an infinite number of calculations at each time step, I think the only kind of computer which would be more powerful would be one capable of an infinite number of calculations in zero time steps. (Not even one time step, because remember the quantum computer can already do that.))
So if solving NP-complete problems in polynomial time requires that kind of computer, then the answer is P < NP, again.

I think the only crucial part of this proof is whether we can prove that quantum computers are incapable of solving NP-complete problems in polynomial time, no matter what algorithm steps are used.

Since all are equivalent, how about we choose the Travelling Salesman Problem (TSP) to solve using a quantum computer?

Assume we represented the input graph structure for the problem in the quantum computer in any way we want, like an adjacency list/matrix for example.
Then we could encode that input state using N registers, each with M qubits.
And for representing the solution output we could use P registers, each with Q qubits.
And we want to set the input registers to any given TSP input state and get the answer in at most a polynomial number of time steps.
Realize that those polynomial number of time steps can be used following any algorithm we want.

If we look at how a quantum computer allows us to solve the integer factoring problem (assuming my understanding is correct), our inputs are two quantum registers with unknown values.
Then we apply multiplication calculation steps and get an output of unknown value in a third quantum register.
(So unknown inputs and unknown calculation output.)
But quantum mechanics allows us to force the output register to any certain value and get the input register values for certain, or the reverse, where we force the input registers to certain known values and get the output value for certain.

But realize that for TSP, if we force the output register(s) to the solution for the given input values (assuming we know the solution already), then the input register values cannot be determined,
because the same (optimal) route can be the optimal solution for many different input register states.
So there is a one-to-many relationship between the (optimal) solution state and the possible input states.
So if we force a solution state onto the output registers, the input register values must be indeterminate.
That means a quantum computer cannot solve TSP in reverse (unlike the Integer Factorization Problem).
And I think this failure in one direction clearly says TSP is a harder problem than the Integer Factorization Problem.
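
(The one-to-many claim can be checked by brute force at small sizes: many different weight matrices share the same optimal tour. A minimal Python sketch:)

# Brute-force check of the one-to-many claim: many different weight matrices
# share the same optimal tour (small n, random symmetric integer weights).
import random
from itertools import permutations
from collections import Counter

def best_tour(w, n):
    best = None
    for perm in permutations(range(1, n)):      # fix city 0 as the start
        tour = (0,) + perm
        cost = sum(w[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if best is None or cost < best[0]:
            best = (cost, tour)
    return best[1]

random.seed(2)
n = 5
seen = Counter()
for _ in range(300):
    w = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            w[i][j] = w[j][i] = random.randint(1, 9)
    seen[best_tour(w, n)] += 1

print(seen.most_common(3))   # several distinct instances map to the same tour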

Also realize that the reason we can solve the Integer Factorization Problem very fast using a quantum computer is that there is entanglement between the input and output quantum registers.
(So we can force either the input or output registers to any certain values we want and get the certain (unique) answer for the other side.)
Entanglement requires a one-to-one relationship between the two sides (input/output or problem/answer) to work, and it is a symmetric rule of quantum mechanics.

But also realize that we established above that TSP cannot possibly be solved in both directions using a quantum computer, using any polynomial time algorithm steps. (Because the output-to-input direction solution is not possible for sure.)
But we also know entanglement needs to work in both directions, because it is a symmetric law of nature.

So I think these mean a quantum computer cannot solve TSP in polynomial time no matter what algorithm is used.
And that means solving TSP in polynomial time requires a computer more powerful than any quantum computer.
And that means P < NP.

Is there anything missing in this proof? (Since obviously I cannot see anything wrong with it myself.)

I think we established that no quantum algorithm (that uses entanglement) can solve TSP in polynomial time.
But what about the possibility of a classical (non-quantum; no entanglement) algorithm solving it in polynomial time?
Then we need to ask whether any classical algorithm can be converted to a quantum algorithm.
Is it really possible to have a classical algorithm that cannot be converted to any quantum algorithm?
Because since there is no quantum algorithm for TSP (that always runs in polynomial time), if there is any classical algorithm for TSP (that always runs in polynomial time), it should be impossible to convert that algorithm to a quantum algorithm.
I do not think the existence of such algorithms is possible, but I do not have any proof for this claim, and I do not know if such a proof already exists or not.

Also realize that (assuming what is above is true), if we want an encryption algorithm that cannot be broken by any quantum computer,
it needs to be based on an NP problem like TSP, instead of a problem like Integer Factorization.

In the above argument we assumed that a quantum computer cannot solve TSP (in polynomial time steps), because if we have the output (solution) we cannot use it to get the input problem state,
since the input state may not be unique, so the input register qubits could not know what bit states to choose. (And so they would stay indeterminate.)

But what if that assumption is wrong? What if the input state registers would be set to one state picked from all possible input states for that certain output state (with equal probability for each)? Does that not mean it may be possible to have a polynomial time solution in both directions (input to output and output to input)? I think the total number of possible (valid) input states for a certain given output state would often be very large.

Realize that in the TSP problem what we really have is a certain input state, and we want to find the optimal solution (output state) for it. If we have a candidate output state and we want to see if it is the optimal solution for our certain input state, and each time we try (set the output register(s) to the candidate output state) we get a randomly picked possible valid input state, then we may need to try so many times until we get the input state matching the one we were trying to solve for. So I do not think we could have a polynomial time solution. Which I think means the quantum computer should still be considered unable to solve the problem in both directions.
Realize that in TSP problem what we really have is a certain input state and we want to find the optimal solution (output state) for it. If we have a candidate output state and we want to see if that is the optimal solution for our certain input state, and each time we try (set output register(s) to the candidate output state) we get a randomly picked possible valid input state, then we may need to try that so many times until we get the input state matching to what we were trying to solve for. So I do not think we could have a polynomial time solution. Which I think means quantum computer still should be considered unable to solve the problem in both ways.