20170731

Physics Of Star Trek

I have seen perhaps every Star Trek TV episode and movie.
Below I will try to suggest more plausible ways of realizing similar technologies according to the known laws of physics of our Universe.
I do not know whether similar explanations have been provided by anyone before.

Super Energy Sources:
They could be portable fusion reactors which are almost perfectly efficient.
They could provide continuous power (similar to DC) or as repeating pulses (similar to AC).
There may also be super batteries that store a dense cloud of electron gas in a vacuum (or as a BEC?).

Stun guns:
Imagine a super-powerful gun that momentarily creates conductive paths in the air using pulsed/continuous UV lasers.
It then sends a powerful electric shock to the target along those conductive paths.
(I think this technology is already being developed.)

Teleportation:
Imagine two teleportation machines (chambers).
The sender machine creates some kind of quantum shock wave that instantly converts the target object into gamma photons that carry the same quantum information.
That information is sent to the receiver machine, which has a giant BEC (made of the same kinds of atoms/molecules, in the same proportions, as the target object?).
When the information is applied to the BEC (instantly, like a quantum shock wave), it somehow instantly collapses quantum mechanically into an exact copy of the object.

Phasers:
A phaser instantly destroys the target object using a quantum shock wave similar to the one used in teleportation.
(The target object is destroyed just as in teleportation, but there is no receiver for its quantum information.)

Artificial Gravity:
Imagine if we had small coils that could create high levels of positive/negative spacetime curvature around them (spherical/cylindrical).
We could place a grid of those coils under floors etc. to create artificial gravity.

Force Fields:
Imagine if we created spherical/cylindrical spaceships that are covered by a dense grid of (+/-) gravity coils,
and also a dense grid of (superconducting) coils that can create (+/-) electric/magnetic fields.
Would it not be possible to use them to create "force fields" all around the spaceships to deflect any kind of attack (atoms/particles/photons)?

Cloaking Fields:
Imagine if we created spherical/cylindrical spaceships that are covered by a dense grid of (+/-) gravity coils.
Would it not be possible to use them to create a photon deflection field all around the spaceships?

Warp Speed:
Imagine if we created spherical/cylindrical spaceships that are covered by a dense grid of (+/-) gravity coils.
Would it not be possible to use them to create a warp bubble all around the spaceships to act like an Alcubierre drive?

Sub-space Communication:
(Since we assume we have the ability to manipulate the curvature of spacetime.)
Imagine we have the technology to create micro wormholes as twins and the ability to trap them indefinitely.
A communication signal enters either one and instantly comes out of the other.
Each time we create a new set of twin micro wormholes, we keep one in a central hub on Earth,
and the other is carried by a spaceship or placed on a different planet/moon/space station.
(The same technology could also be useful for creating and trapping micro black holes, which may be useful as compact batteries.)

Electronic Dampening Field:
Imagine an EMP created as a standing wave using a grid of phased-array EMP generators.

Spaceships with hulls that can withstand almost any kind of attack, at least for a while if necessary:
How about metallic hydrogen, or another solid material created using ultra-high pressure (and temperature)?

I think it is also clear that Star Trek physics definitely requires devices with the ability to create strong positive and negative spacetime curvatures.
How could that work according to the laws and limitations of known physics, assuming they must always be obeyed?

According to General Relativity, spacetime bends in the presence of positive or negative mass/energy(/pressure/acceleration).

What if we annihilated a small amount of matter/antimatter at a spot (as pulses)?

(Could there be an economical way to create as much antimatter as we need? Think about how we can easily induce a permanent magnet to permanently switch its N and S sides, by momentarily creating a strong enough reverse magnetic field using an electromagnet.
Could there be any way to create a special quantum field/shockwave (using an electric and/or magnetic field generator, or a laser?)
that, when it passes through a sample of matter (trapped in mid-vacuum), induces that matter to instantly switch to antimatter (so that instantly all electrons switch to positrons, all protons to antiprotons, all neutrons to antineutrons)?)

What if we created an arbitrarily strong volume/spot of magnetic and/or electric field(s)?

What if we created a spot of ultrapressure using a tech way beyond any diamond anvil?

What if we created a spot of negative ultrapressure (by using pulling force)?
(Imagine if we had or created a (solid?) material that is ultrastrong against pulling force (even for a moment)?)

What if we had or created an ultrastrong (solid?) disk/sphere/ring and trapped it in mid-vacuum,
and later created an ultrapowerful rotational force on it (even for a moment) using an ultrapowerful magnetic field,
so that the object gained (even for a moment) an ultrahigh speed and/or positive/negative acceleration?

20170730

3D VOLUME SCANNER IDEA

I recently learned about an innovative method of getting 3D scans of objects. It overcomes the line-of-sight problem and also captures the inner shape of the object. A robot arm dips the object into water in different orientations. Each time, how the water level changes over time is measured, and from these measurements the 3D shape of the object is calculated, much like a CAT scan.

I think this method can be improved upon greatly as follows:

Imagine we put a tight metal wire ring around the object we want to scan, maybe using a separate machine.
It could be a bendable but rigid steel wire ring, maybe a bicycle-spoke wire ring, or even a suitable kind of plastic.
The object could be in any orientation, held tight by the ring.

Imagine we have an aquarium tank filled with liquid mercury
(which, unlike water, would keep the object and the tank walls dry, so that measurements would be more precise).
(Mercury is also conductive, which would make measurements easier using electronic sensor(s).)
(It could also be a cylindrical tank.)

Imagine that inside the tank we have a vertical bar that can move a horizontal bar up and down under electronic control.
Imagine that the horizontal bar has, at its middle (on the underside), a hook/lock for the wire ring (around the object).
That hook/lock has an electronically controlled motor that can rotate the wire ring (and so the object) to any (vertical) angle.
(To prevent the ring/object from swinging like a pendulum when it is dipped into the liquid (fast) each time, we could add a second horizontal bar with adjustable height that has a hook/lock for the wire ring at its middle (on the upper side). The ring would then be held in place at its top and bottom points by the two horizontal bars.)

Now imagine that to take new measurements, each time we rotate the object by a small, equal angular increment (within 360 degrees).
Then we dip the object fully into the liquid (at constant speed) and take it fully back out (at constant speed).
Every time we dip the object, we record the changes in the liquid level in the tank over time.
(While the object is fully dipped, we could rotate it again and then record the liquid level changes as we take the object fully back out,
to get two sets of measurements per cycle instead of one.)
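
To make the measurement loop above concrete, here is a minimal Python sketch of the data acquisition, assuming hypothetical rotate_ring(), set_dip_depth(), and read_liquid_level() functions for the motors and the level sensor (these names are placeholders, not something from the referenced articles):

import numpy as np

TANK_CROSS_SECTION_AREA = 0.04  # m^2, assumed constant tank cross-section
N_ANGLES = 180                  # number of equal angular increments over 360 degrees
N_DEPTH_STEPS = 500             # liquid-level samples per dip

def scan(rotate_ring, set_dip_depth, read_liquid_level, dip_range=0.2):
    """Collect liquid-level curves for each orientation of the object.

    Returns the dip depths, the displaced volumes, and the derivative of
    volume with respect to depth, which approximates the cross-sectional
    area of the object at each depth and orientation (CT-like projection data).
    """
    depths = np.linspace(0.0, dip_range, N_DEPTH_STEPS)
    volumes = np.zeros((N_ANGLES, N_DEPTH_STEPS))
    for i in range(N_ANGLES):
        rotate_ring(360.0 * i / N_ANGLES)      # rotate the ring (and object) to the next angle
        level0 = read_liquid_level()           # reference level before dipping
        for j, d in enumerate(depths):
            set_dip_depth(d)                   # lower the object (constant speed in practice)
            rise = read_liquid_level() - level0
            # Displaced volume ~ level rise * tank cross-section
            # (ignoring the area the object itself occupies at the surface).
            volumes[i, j] = rise * TANK_CROSS_SECTION_AREA
        set_dip_depth(0.0)                     # raise the object fully back out
    areas = np.gradient(volumes, depths, axis=1)
    return depths, volumes, areas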

Of course mercury is highly toxic and reacts with some metals.
So it would be best to find a better liquid.
The liquid would need to be non-wetting, to keep the scanned objects and the tank walls dry. As low a viscosity and density as possible, a wide temperature range with linear volume change with temperature, and constant volume under commonly varying air pressures would be better. Being stable (chemically inert) and non-toxic is a must.
Electrical conductivity would also be a plus.

References:
https://www.sciencedaily.com/releases/2017/07/170721131954.htm
http://www.fabbaloo.com/blog/2017/7/25/water-displacement-3d-scanning-will-this-work
https://3dprintingindustry.com/news/3d-scanning-objects-dipping-water-118886/

20170729

A Simple Derivation of General Relativity

According to Einstein's equivalence principle, a person accelerating upwards in an elevator (in outer space, with no gravity) cannot distinguish that acceleration from (downward) gravity. Thus acceleration and gravity are physically equivalent.

Assume a (laser) light beam is sent horizontally from one side (wall) of the elevator to the other side (wall).

What is the Y coordinate of the beam for a given X or T, if the constant upward speed of the elevator is V?
x=c*t (assuming x is positive towards right)
y=v*t (assuming y is positive downwards)
m=y/x=(v*t)/(c*t)=v/c
Applying parametric to implicit conversion:
x=c*t => t=x/c => y=v*(x/c)=(v/c)*x=m*x => line with tangent m

What is the Y coordinate of the beam for a given X or T, if the constant upward acceleration of the elevator is A?
x=c*t (assuming x is positive towards right)
y=a*t^2 (assuming y is positive downwards)
Applying parametric to implicit conversion:
x=c*t => t=x/c => y=a*(x/c)^2=(a/c^2)*x^2 (parabola)
Geometry says:
if a parabola is y=x^2/(4*f) => f: focal length
The focal length of a parabola is half of its radius of curvature at its vertex => f=r/2
The radius of curvature is the reciprocal of the curvature (curvature of circle: 1/r)
Then:
y=(a/c^2)*x^2=x^2/(4*f) => a/c^2=1/(4*f) => 4*f*a/c^2=1 => f=c^2/(4*a)
r=2*f=c^2/(2*a) => curvature=1/r=(2*a)/c^2
Newton's laws say: F=G*M*m/d^2 and F=m*a => acceleration for a unit mass in the gravitational field of mass M:
a=F/m=F/1=G*M*1/d^2=G*M/d^2
Then:
curvature=(2*a)/c^2=(2*G*M/d^2)/c^2=2*G*M/c^2/d^2

Is this formula for calculating spacetime curvature correct (using the mass of the object (star, planet, etc.) and the distance from its gravitational center)? I have no idea. I searched online for a similar formula to compare with, but could not find one.
If the formula is wrong, I would of course like to know its correct expression (using the same input variables M and d), and also whether it is possible to derive that formula from the same thought experiment.
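
As a quick units/magnitude check (not a verification of the derivation itself), here is a minimal Python sketch that simply evaluates the formula derived above, curvature = 2*G*M/(c^2*d^2), for the Sun, using standard values of G, c, the solar mass, the solar radius, and the astronomical unit:

# Evaluate curvature = 2*G*M / (c^2 * d^2) for two solar-system cases.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m
AU = 1.496e11        # Earth-Sun distance, m

def curvature(mass_kg, distance_m):
    """Curvature (1/m^2) according to the formula derived in this post."""
    return 2.0 * G * mass_kg / (c**2 * distance_m**2)

print(curvature(M_SUN, R_SUN))  # at the Sun's surface: ~6.1e-15 1/m^2
print(curvature(M_SUN, AU))     # at 1 AU:              ~1.3e-19 1/m^2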



20170717

What Is Spacetime?

First assume there is an ND uniform matrix (like a crystal) cellular automaton quantum computer (UCAQC), where each of its cells is Planck-length in size and made of M qubits (like a register (set)).
Assume our universe is a bubble/ball of information (energy) expanding in that matrix.
Assume the time step of the UCAQC is the Planck time (which leads to the speed of light being the ultimate speed).
Assume each particle of the Standard Model is a ball/cluster/packet of information moving around.
Assume that when two (or more) particles collide, they temporarily create a combined information (energy) ball that is unstable because (for some reason) only the particles of the Standard Model are allowed, so the newly created unstable particle is forced to decay/divide into a set of particles allowed by the Standard Model.
Naturally, the existence of a Newtonian spacetime is easy to explain for such a universe.
(Also realize it is naturally compatible with quantum mechanics.)
But how about Relativity?
I think Special Relativity arises because the flow of information about events is limited to the speed of light for all observers.
A thought experiment:
Imagine we have a spaceship in Earth's orbit that sends a blue laser beam to a receiver on the ground.
Imagine the spaceship starts moving away from Earth, with its speed continually increasing towards the speed of light.
Imagine it reaches a speed so that its laser light looks red to us and to our measurement instruments.
(Because of Special Relativity.)
Realize that an observer on the spaceship would still see blue laser photons leaving the device.
But an observer on the ground sees and measures red laser photons.
The question is: have the laser photons actually lost energy?
Are they really blue (higher-energy) or red (lower-energy) photons?
Can we not say they are actually blue photons, the same as when they were created, but that we see/detect them as red photons because of our relative (observer) motion?
What is really happening is the same as how the Doppler effect changes the frequency of sound.
Different observers see photons with different energies because the density of information flow is different for each observer,
even though the speed of information flow is the same (the speed of light) for all observers.
That is why I do not think the expansion of the universe actually causes photons to lose energy.
I think all photons stay the same as when they were created, but they can be perceived with different energies by different observers.
(So when we measure the energy of a photon, we actually measure its information density, not its total information (which is constant and equal for all photons).)
Similarly, I think the (positive) spacetime curvature around objects with mass compresses the Compton wavelength of all particles present.
In the case of black holes, the Compton wavelength of a particle gets compressed as it approaches the event horizon.
Upon reaching the event horizon, the wavelength drops to the Planck length and you get Planck particles (which, I think, is what black holes are made of).

20170714

Universal Cellular Automata Quantum Computer

If the Universe is a qubit-based CA quantum computer operating at the Planck scale, how can it explain QM and Relativity?

The human mind operating like a quantum computer (software) can explain the observer effect:
it would be due to quantum information exchanges between the qubits of the experiment and the qubits of the observer's mind, like operations in a quantum computer.

The particles of the Standard Model (6 quarks + 6 leptons + 4 gauge bosons + 1 Higgs boson) + the Planck particle can be explained as (spherical?) clusters of information.
(Then, using the list of quantum properties common to all particles (like energy, mass, charge, spin, ?), it may be possible to determine how many qubits (at least) are needed for each (Planck-size) cell of the universe CA quantum computer.)
(How can particle interactions be explained?)

It can also explain Relativity, because the speed-of-light limit comes from the (constant) speed of information transmission of the Universe CA quantum computer.
So each observer can receive information only at the (constant) speed of light. Non-moving and moving observers watching the same events would disagree on how fast the events unfold, because each receives the information (light) generated by the events at the same speed but with a different information flow density (frequency).

Gravity can be explained as an entropic force.

The Big Bang can be explained as initially creating a ball of (maximally) dense information (energy) at the center of the Universal (CA) Quantum Computer (UCAQC).
Imagine there is a tendency for information to flow from more dense to less dense volumes of the UCAQC, and that this causes the expansion of the universe.
I think in the earliest times after the Big Bang this expansion force should have been at its most powerful, and later it would drop.
It could be that:
F = U * (1 - V / W)
Where:
F: Expansion Force at time t after Big Bang
U: an unknown constant
V: Volume of Universe Information Ball at time t after Big Bang
W: Max Possible Volume of Universe Information Ball (at time infinity after Big Bang)
Or maybe the expansion force/speed could depend on the current (uniform) curvature of V.
(I had explained how to calculate the universal (uniform) curvature in one of my previous blog posts.)
(But in either case, it would mean there is really no such thing as dark energy, and no universal inflation field either.)
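
As a toy illustration of how the proposed expansion force would behave, here is a minimal Python sketch; U and W are unknown constants in the formula above, so they are set to 1 here purely for illustration:

def expansion_force(V, U=1.0, W=1.0):
    """F = U * (1 - V/W): maximal (F = U) at V = 0, dropping to zero at V = W."""
    return U * (1.0 - V / W)

for fraction in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(fraction, expansion_force(fraction))  # 1.0, 0.75, 0.5, 0.25, 0.0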

What Black Holes Are Made Of 2

I know that many physicists today believe that infinities are a sign of the breakdown of a physical theory. I think the same. So I do not think BHs have a singularity at their center, which means they must be made of some kind of particle. And I think there is only one particle that fits the bill (and it fits perfectly): a hypothetical particle called the Planck particle. Its Wikipedia page says it already shows up naturally in physical equations/calculations.

Also I think BHs must be in some kind of fluid state, similar to Neutron stars.

For example, I remember reading that complex numbers were showing up in solutions of polynomial equations long before they were discovered.

Extreme speculation mode:
I know complex numbers are extremely useful in physics.
It could be said that complex numbers are more powerful by being 2D instead of 1D.
I think if the universe is some kind of cellular automaton (computer) operating at the Planck scale,
it is quite possible its calculations are done using quaternions (4D), octonions (8D), maybe even sedenions (16D).

Also, if there are singularities at the centers of BHs, how is it possible that singularities (objects of zero size) differ from each other so as to create different sizes of BHs around them?
Or should we really accept that properties like mass/energy are just absolutely abstract numbers, so that an object of zero size can contain them (just as pure information) without a problem?

In case what I mean is unclear:
Your viewpoint is: yes, the theory does not apply at the center of a BH, but it still applies everywhere around it. (Or is it that the theory also applies at the center, and that is why we must accept the existence of a real singularity?)
But my viewpoint is that the theory breaks down at the center, and that means what we think about the structure of BHs must be completely wrong. (Like trying to build a skyscraper on a really bad foundation.)

“What force stops your hypothetical high density ball collapsing into a singularity?”
That is exactly why I was suggesting BHs must be made of Planck particles.
From Wikipedia about Planck Particle:
“its Compton wavelength and Schwarzschild radius are about the Planck length”
Planck particles are the smallest possible particles. Imagine any particle being compressed in an unstoppable way: its Compton wavelength gets smaller and smaller until finally it is reduced to the Planck length, where it cannot get any smaller.

I think BHs being made of Planck particles is theoretically possible, and it does not lead to any contradiction with either Quantum Mechanics or Relativity.
But I am not a physicist, and I would like to see Ethan write a post evaluating this idea, if possible.

(I had posted these above comments about a week ago here:
http://scienceblogs.com/startswithabang/2017/07/07/is-it-possible-to-pull-something-out-of-a-black-hole-synopsis/)

Ideas For Long Term Future Of Humanity


Can we move Mars (which is too cold today) closer to the Sun to make it more hospitable for human life?
Could we slow down the orbital speed of Mars (to bring it closer to the Sun) by slowly changing the orbits of selected asteroids (and comets?) to make them collide with Mars in a controlled way? (And what if we continued doing that for hundreds of years?)
Colliding asteroids with Mars would also increase its mass, which is a good side effect because Mars is significantly smaller than Earth.
Colliding comets is even better, because they would increase the (surface) water content of Mars.
Since we know that the Sun will gradually get hotter and bigger as it ages, here is an utterly insane long-term plan to ensure the distant future of humanity, assuming we will have the power to modify the orbits of asteroids and comets such that we can make any of them collide with any planet in a controlled way, so that we can increase or decrease the size of the planet's orbit (we would also keep increasing the mass of the planet we bombard, and we could use available comets to provide extra water to the target planet):

Imagine that first we bombard Mars until its climate and water content are good enough for humanity.
Then we move humanity to Mars (or as much of it as we can),
and then we bombard Earth to increase the size of its orbit as much as we want/need.
Afterwards, as the Sun keeps getting bigger/hotter,
we keep moving humanity back and forth between Earth and Mars, and each time after we have moved humanity to one of the planets, we bombard the other planet to increase the size of its orbit as much as we want/need.

Potential problems would be: can we keep the orbits of all the other planets stable in the long term,
and what are the limits to continually increasing the mass (and water content) of a planet we want to live on?
Also, after how many moves of humanity would we run out of asteroids to use (and be able to use only comets)?
Could we still continue by using comets?
If so, when would we run out of comets?
And if we also ran out of comets, what would be the final mass (and water content) of Earth/Mars?
What would be the size of the orbit of Earth/Mars, and would there be any chance of moving humanity to a planet around a nearby star?
(Because I think if the size of the orbit is big enough, it could become possible to come close to a suitable planet around a nearby star. Keep in mind that we would prefer to save all of humanity if possible.)

Another potential problem is: even if we added lots of water to Mars, how would we get a suitable atmosphere?
Assuming we have no electrical power production problem, maybe we could split lots of water into oxygen and hydrogen gas, and release the hydrogen gas into space.

But then, can we live in an almost pure oxygen atmosphere?
Do the common rocks on Mars contain enough nitrogen that we could release into the atmosphere?
Or is there any other suitable inert gas we could produce in sufficient quantity from the rocks?

But also, how could we modify the orbits of almost any asteroid or comet?
I do not think any kind of rocket fuel would be enough.
But assuming we can produce portable fusion power generators that can generate maybe something like megawatts for decades, it may be possible to produce enough thrust in space using only electrical power, in different ways.
One way I was thinking of would be to create giant rotating electric and/or magnetic fields around a spacecraft, to swim through the surrounding sea of cosmic rays (positively and negatively charged particles) like a submarine.

Of course, if we had the technology to easily modify asteroid and comet orbits, it would also be useful for protecting humanity from any unwanted asteroid or comet impacts anywhere.

Also, would any space habitat be a viable place for humanity to live indefinitely?
Would it not keep getting damaged by cosmic rays?
Could we always repair and protect it?

How about another crazy idea:
Could we build lots of giant towers on Earth whose tops reach above the atmosphere?
If so, and if we also had the technology to create efficient and powerful purely electric drives for space, maybe we could turn Earth itself into a mobile planet.

This may be the craziest idea:
What if we turned Earth into a mobile planet, bombarded Mars with asteroids and comets to bring it closer to Earth, gave Mars a similar amount of water and an oxygen atmosphere, and later also turned Mars into a mobile planet?
Then we would have two mobile planets to live on and move anywhere, maybe even to nearby stars.
Then maybe we could keep creating more mobile planets everywhere we go in the universe.

(I had posted these above comments here about two weeks ago:
http://scienceblogs.com/startswithabang/2017/07/01/ask-ethan-could-we-save-the-earth-by-migrating-it-away-from-the-sun-synopsis)

20170704

Pascal's Wager

My reasoning below is completely hypothetical.
I wanted to try to define an objective approach.
I am not claiming these are the steps I ever followed myself, either.

Pascal's Wager implies that we should give serious consideration to the question of whether or not to believe in (any?) God(s),
because the potential loss or gain could be infinite.

If we choose not to believe, then I think there is nothing further to consider, because we have our answer.

But let's say we choose to believe; then what? Which God(s) should we believe in?

Then the question becomes which world religion(s) we should choose, doesn't it? Because I think it is obvious that not all religions are compatible with each other, so there is no way we could choose to believe all of them together to cover all available options.

Is there really any way to objectively compare all world religions to make a decision about which one to believe? How could we compare any two religions objectively?

I think the first thing to do would be to gather the available information about all world religions in a common, comparable format. For that we could create a standard list of questions for all religions.

For each religion we could list:
Which God(s) should we believe in, and what are their powers and properties (like shape, size, age, etc.)?
Do those God(s) want us to believe in them, and do they offer rewards/punishments (finite/infinite)?
Would those God(s) treat us with justice? Are they good?
What are their explanations for the existence of the universe and its creation, how the universe works, and why it was created?
Why and how was humanity created?
What are their descriptions of the afterlife, life in hell, and life in heaven?
Are there any serious logical inconsistencies or absolute physical impossibilities in their explanations/beliefs/claims?
How does each religion see all the others (e.g., also okay to believe (now); was okay to believe in the past but not today; never okay to believe)?
How should we live our lives (are any kinds of sacrifices needed)?

So after we have collected information about all religions in an objectively comparable way,
would that be enough for each one of us to make a choice?

Assume each person on Earth examined our comparable information about all religions,
and somehow each and every one of them completely understood the information,
and also agreed that it consists entirely of objective statements about each and every religion.

I wonder what percentage of people would choose which religion and what their reasoning(s) would be for their choices.

Is that it? Can we not even try to choose a religion absolutely objectively?

I think for that we could try to approach the problem mathematically, as in game theory or probability theory.
But still, can any method of calculation (algorithm) really provide a clear and objective answer without requiring any subjective input values?
I think the answer may be no.

20170703

Solution of P versus NP Problem

I know that many people have attempted to prove an answer to the P versus NP problem.
I also know that it is one of the seven Millennium Prize Problems.

Here is my idea for a proof (possibly with some missing pieces):

Since quantum computers are theoretically capable of infinitely many calculations per time step (where the minimum time step could be the Planck time),
we can say quantum computers are definitely more powerful than any equivalent regular computer.
But there may also be some calculation algorithms for which a quantum computer cannot provide an answer any faster than a regular computer.
Then we could at least say for sure that:
computingpower(regularcomputer) <= computingpower(quantumcomputer)

If so, then if we can prove that, no matter what calculation algorithm is used, a quantum computer cannot solve any of the known NP-complete problems in polynomial time, it would mean that solving any NP-complete problem in polynomial time requires a computer more powerful than any quantum computer.
And that would mean the answer to the P versus NP problem is P < NP.
(Keep in mind that we already know that solving any one of the NP-complete problems means solving all of them, because each of those problems can be translated into any other in polynomial time.)
(And as for what kind of computer could be more powerful than any quantum computer: keep in mind that since a quantum computer is capable of an infinite number of calculations at each time step, I think the only kind of computer that would be more powerful would be one capable of an infinite number of calculations in zero time steps. (Not even one time step, because remember the quantum computer can already do that.))
So if solving NP-complete problems in polynomial time requires that kind of computer, then the answer is P < NP, again.

I think the only crucial part of this proof is whether we can prove that quantum computers are incapable of solving NP-complete problems in polynomial time, no matter what algorithm steps are used.

Since they are all equivalent, how about we choose the Travelling Salesman Problem (TSP) to solve using a quantum computer?

Assume we represent the input graph structure for the problem in the quantum computer in any way we want, for example as an adjacency list/matrix.
Then we could encode that input state using N registers, each with M qubits.
And to represent the solution output we could use P registers, each with Q qubits.
We want to set the input registers to any given TSP input state and get the answer in at most a polynomial number of time steps.
Realize that those polynomially many time steps can be spent following any algorithm we want.
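
For concreteness, here is a purely classical Python sketch of what the TSP input (an adjacency matrix) and output (an optimal tour) look like; the 4-city distances are made up for illustration, and the brute-force search is of course exponential, not the quantum encoding discussed here:

from itertools import permutations

# Hypothetical 4-city example; dist[i][j] is the distance from city i to city j.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]

def tour_length(tour):
    """Total length of a closed tour visiting the cities in the given order."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

# Fix city 0 as the start and try every ordering of the remaining cities.
best_tour = min(([0] + list(p) for p in permutations(range(1, len(dist)))),
                key=tour_length)
print(best_tour, tour_length(best_tour))  # [0, 1, 3, 2] with length 23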

If we look at how a quantum computer allows us to solve the integer factorization problem, assuming my understanding is correct, our inputs are two quantum registers with unknown values.
Then we apply the multiplication calculation steps and get an output of unknown value in a third quantum register.
(So: unknown inputs and an unknown calculation output.)
But quantum mechanics allows us to force the output register to any definite value and obtain the input register values with certainty, or the reverse, where we force the input registers to definite known values and obtain the output value with certainty.

But realize that for TSP, if we force the output register(s) to the solution for the given input values (assuming we already know the solution), then the input register values cannot already be determined,
because the same (optimal) route can be the optimal solution for many different input register states.
So there is a one-to-many relationship between the (optimal) solution state and the possible input states.
So if we force a solution state onto the output registers, then the input register values must be indeterminate.
That means a quantum computer cannot solve TSP in reverse (unlike the Integer Factorization Problem).
And I think this failure in one direction clearly says TSP is a harder problem than the Integer Factorization Problem.

Also realize that the reason we can solve the Integer Factorization Problem very fast using a quantum computer is that there is entanglement between the input and output quantum registers.
(So we can force either the input or the output registers to any definite values we want and get the definite (unique) answer for the other side.)
Entanglement requires a one-to-one relationship between the two sides (input/output or problem/answer) to work, and it is a symmetric rule of quantum mechanics.

But also realize that we established above that TSP cannot possibly be solved in both directions using a quantum computer and any polynomial-time sequence of algorithm steps (because a solution in the output-to-input direction is not possible for sure).
But we also know entanglement needs to work in both directions, because it is a symmetric law of nature.

So I think this means a quantum computer cannot solve TSP in polynomial time, no matter what algorithm is used.
And that means solving TSP in polynomial time requires a computer more powerful than any quantum computer.
And that means P < NP.

Is there anything missing in this proof? (Since obviously I cannot see anything wrong with it myself.)

I think we established that no quantum algorithm (that uses entanglement) can solve TSP in polynomial time.
But what about the possibility of a classical (non-quantum, no-entanglement) algorithm solving it in polynomial time?
Then we need to ask whether any classical algorithm can be converted into a quantum algorithm.
Is it really possible to have a classical algorithm that cannot be converted into any quantum algorithm?
Since there is no quantum algorithm for TSP (that always runs in polynomial time), it follows that if there is any classical algorithm for TSP (that always runs in polynomial time), it must be impossible to convert that algorithm into a quantum algorithm.
I do not think the existence of such algorithms is possible, but I do not have any proof of this claim, and I do not know whether such a proof already exists.

Also realize that (assuming the above is true), if we want an encryption algorithm that cannot be broken by any quantum computer,
it needs to be based on an NP problem like TSP, instead of a problem like Integer Factorization.

In the above argument we assumed that a quantum computer cannot solve TSP (in polynomially many time steps) because, if we have the output (solution), we cannot use it to get the input problem state for it:
the input state may not be unique, so the input register qubits could not know which bit states to choose. (And so they would stay indeterminate.)
But what if that assumption is wrong? What if the input state registers were set to one possibility picked from all possible input states for that certain output state (with equal probability for each)? Would that not mean it may be possible to have a polynomial-time solution in both directions (input to output and output to input)? I think the total number of possible (valid) input states for a certain given output state would often be very large.
Realize that in TSP what we really have is a certain input state, and we want to find the optimal solution (output state) for it. If we have a candidate output state and we want to see whether it is the optimal solution for our certain input state, and each time we try (set the output register(s) to the candidate output state) we get a randomly picked possible valid input state, then we may need to try many times until we get the input state matching the one we were trying to solve for. So I do not think we could have a polynomial-time solution, which I think means the quantum computer should still be considered unable to solve the problem in both directions.

Quantum Computers and the Universe


I had read a lot about quantum computers over many years but never really understood how they actually work,
even though I am someone who has been interested in computers, science, and technology since his early teenage years.
Now I think, if I cannot understand exactly how quantum computers work, why not try to guess by myself?
(I have a computer science undergraduate degree from a US university, with a minor in math.)

What made quantum computers so popular was the arrival of the internet.
Their so-called "killer application" is Shor's Algorithm (which I never understood either).
I think practically all encrypted private communication on the internet uses the (RSA) public-key cryptography protocol.
What makes it almost unbreakable is the mathematical fact that multiplying two very large positive integers can be done very fast,
but doing the reverse (finding those two multiplied (prime) numbers given only the multiplication result) is very hard (using normal computers and all the algorithms we know).

Even though I never understood how quantum computers work, I think I always understood the "magical" power of qubits.
Unlike a regular computer bit (in memory, which is always either 0 or 1 as we set it, at any time, using processor instructions), a qubit is able to stay indeterminate between the 0 and 1 states (for as long as we want?), until we query its value and get an answer of 0 or 1.

Now assume we want to break RSA. I think our (main) problem is that we have an N-digit binary positive integer which we know was calculated by multiplying two very large (roughly N/2 digits each?) binary positive integers (both prime).
(They were chosen to be prime (or prime with very high probability), I think, because that makes the problem the hardest to solve.)
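
Here is a small Python sketch of that asymmetry; the primes are tiny just to keep the example runnable, while real RSA keys use primes hundreds of digits long:

p = 10007            # two small primes, for illustration only
q = 10009
n = p * q            # multiplication: essentially instant

def factor_by_trial_division(n):
    """Return a nontrivial factor of n by checking divisors up to sqrt(n) (naive and slow)."""
    if n % 2 == 0:
        return 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n  # n is prime

print(factor_by_trial_division(n))  # 10007; for numbers hundreds of digits long,
                                    # no known classical algorithm finishes in practice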

So the question is: how could Shor's Algorithm break the code?

I think that, similar to regular computers, quantum computers must have a set of possible instructions for writing programs and doing calculations.
I am guessing Shor's Algorithm may be executed on a quantum computer like this:
Assume we have a sufficient number of qubits to use.
Assume we start with two qubit-based processor registers (each N/2 digits).
Assume we start by setting those two registers to all-undetermined qubit states.
Then assume we run a single multiplication instruction to multiply the values of those two registers and store the result in a third register.
(I think that in regular processors multiplication is actually done in multiple internal calculation steps, but the same sequence is triggered by a single instruction each time.)

When the multiplication is done and the result is stored, how do we get the values of the two input numbers?
I am guessing that a quantum computer must have an instruction for loading known bit values into its qubit registers.
But it must also provide a way to load bit values without actually erasing the previous (and undetermined) bit values of each qubit in a register.
So what we are really talking about is forcing the previously undetermined value of each qubit to collapse into a 0 or 1 that we choose.

But if we can do that, can't we use that for FTL communication using entanglement?
Because if we have two entangled qubits (meaning that if we measure either of them, anytime and anywhere, and find its value to be 0, we would know instantly that whenever the other one is measured its value will be found to be 1, and vice versa), and we have the capability to force either one of them to be measured as a 0 or a 1, then we could use the qubits to send a bit of information (whose value we choose) instantly from one to the other, in either direction.

I know that this is a solved (and tricky) problem where quantum mechanics actually does not allow FTL communication.
(Which is also a rule compatible with Relativity.)
So I think quantum mechanics must allow us to force a qubit to any 0 or 1 value we want, but only as long as we still do not know what the value of the other qubit (its twin?) will be.

Then how might we solve the so-called Integer Factorization problem quickly using a quantum computer?

Assume we multiplied two unknown positive integers (qubit register values) and calculated the result in a third register.
Then we forced the third qubit register to the value of the multiplication result (which we already knew).
Assume we also preserved the values of the input registers during the whole multiplication calculation, maybe by copying them to two other separate qubit registers before the multiplication
(which would create entanglement between the copied register qubits).

I think if we forced the (multiplication) result register to the value we know, then the values of the two input registers (which we preserved) should get set to the only possible input prime number values.
(We can then measure (collapse) those input register qubits anytime we want, and learn what their (initially unknown) values were.)

If quantum computers really work like this, can they also be explained by the hidden-variables hypothesis?
I think it claims that when we measure the previously unknown value of a qubit, we are just finding out what definite 0 or 1 value the qubit was set to in the past.
Realize that if that were true, then quantum computers would not work the way we need.
Whether the multiplication result register was set to this value or that, the input register values would not be affected by it, since their unknown (but definite) values would never change.

So it looks like quantum computers allow us to send information instantly across any distance in space (remember, the instant we (force-)set any qubit of the output register, that operation instantly sets the value of the corresponding qubit(s) in the input registers), as long as that information is indeterminate.
But can we not also think of the input registers as the past and the output register as the future?
When we (force-)set the output register at the end of the multiplication calculation, can that not be interpreted as sending information to the past (instantly)?
And if at the end we (force-)set the value of any input registers,
can that not be interpreted as sending information to the future (instantly)?
If so, then this would mean we can send information across space and(/or?) time instantly, but only as long as it is indeterminate information.

I think quantum mechanics requires that if the universe is some kind of cellular automaton,
then its cells must be individual qubits, or a qubit register, or a set of multiple qubit registers.
(Probably all cells are identical across the whole universe.)
(Also, each qubit cell would probably be connected to N neighbors.
Could it even be that all cells are directly connected to all other cells?)

Also, since it looks like each quantum register of N qubits is capable of making a choice between 2^N different possible answers in an instant, it could be said that each quantum register is capable of (at least) 2^N calculations in an instant.
Then it would mean each qubit is capable of 2^N/N calculations per time step of the computer.
Since N can be as large as we want in theory, that means each qubit is capable of an infinite number of calculations (value evaluations) per time step.
(I think the path integrals used to calculate particle actions also indicate that each particle seems to evaluate an infinite number of possibilities at each time step.)

Which brings to mind the question: are quantum computers the ultimate (most powerful) computers possible?
(That is, computers theoretically capable of an infinite number of calculations at each time step.)
I think practically the answer looks like yes (because of quantum theory).
But I think theoretically an even more powerful computer may be possible (though definitely not in our universe), one that does an infinite number of calculations in zero time steps.
(Which, I think, is what some religions imply about what God is capable of,
by saying God can create anything of any size and complexity in an instant, without needing to spend any time on the problem.)

20170701

FLATLAND AND CURVATURE OF UNIVERSE

Wikipedia says "Flatland: A Romance of Many Dimensions is a satirical novella by the English schoolmaster Edwin Abbott Abbott, first published in 1884 by Seeley & Co. of London."
I have never actually read it, but I know it really helps people understand geometric dimensions.

I do not know if these ideas below ever occurred to anyone:

I think it does not matter whether Flatland (the universe) had no curvature at all anywhere,
or had a (positive or negative) uniform global (universal) curvature of any (constant) value;
Flatland would look flat to Flatland people (any observer living in Flatland who has the same dimensions as Flatland).

Now imagine that Flatland is the 2D surface of a 3D sphere, which has a uniform positive curvature everywhere (namely 1/r^2).
The question is this:
Can Flatland people really measure the curvature of their universe or not?
I think most people may be assuming that, since the sum of the internal angles of a triangle in Flatland would be greater than 180 degrees,
Flatland people could easily measure the (global and uniform) curvature of their universe.
But can they really do that, just as a 3D being would obviously see that
the sum of the internal angles of a triangle in Flatland really is greater than 180 degrees?
I think the answer is no.
Imagine a Flatland observer sends a laser beam straight ahead.
Imagine the view of the Flatland observer is like a camera moving along with the photons of the laser beam, just in front of the beginning (head) of the beam.
Imagine that as the beam and camera move, both follow the curvature of their universe along the path of the beam.
If there are stars in the Flatland universe and the laser beam is moving towards the stars, the view of the camera would always be that of a flat universe.
Realize that if the universal curvature of the Flatland universe is uniform everywhere,
the Flatland observer would always think their universe is flat.
And I think this would still be the case no matter how many dimensions the Flatland universe really has.
But I also think Flatland people could still measure non-uniform curvatures in their universe, like the curvature created by the mass of a star.
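
For reference, the standard geometric fact behind the triangle argument above (Girard's theorem) can be sketched in Python; it shows that the angle excess a Flatlander would have to measure scales with the triangle's area relative to the sphere:

import math

def angle_sum_degrees(triangle_area, r):
    """Interior angle sum of a spherical triangle of the given area on a sphere of radius r."""
    excess = triangle_area / r**2             # spherical excess, in radians
    return 180.0 + math.degrees(excess)

r = 1.0
octant_area = 4.0 * math.pi * r**2 / 8.0       # a triangle covering 1/8 of the sphere
print(angle_sum_degrees(octant_area, r))       # 270.0 degrees
tiny_area = 1e-9 * 4.0 * math.pi * r**2        # a triangle tiny compared to the sphere
print(angle_sum_degrees(tiny_area, r))         # ~180.0000007 degrees, practically flat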

So I think it is quite natural that global curvature of our universe looks very close to being flat.

If our universe started with a Big Bang from a point (a singularity or a small spherical object?)
and has been uniformly expanding ever since, and if we combine Occam's Razor with observations of our universe,
I think the simplest global geometry for our universe would be a 3D spherical surface of a 4D sphere.
And just as a 2D spherical surface is curved in the 3rd dimension of space,
our universe would be a 3D spherical surface of three space dimensions curved in a 4th dimension (time).

If so, that implies we can calculate the global curvature of our universe at any time as 1/r^3
(where r is the radius of our universe at that time).

Wikipedia says the distance to the Big Bang in time is "13.799±0.021 billion years".
But I think if we take the expansion of the universe since the Big Bang into consideration,
the distance in space (the radius of the universe) is currently about 46 billion light-years.
This implies that the current global curvature of our universe must be 1/(46 billion light-years)^3.
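
Plugging in the numbers for the estimate above (a minimal Python sketch that simply evaluates the post's 1/r^3 expression with r = 46 billion light-years; it does not judge whether 1/r^3 is the right measure of curvature):

LIGHT_YEAR_M = 9.4607e15            # meters in one light-year
r = 46e9 * LIGHT_YEAR_M             # 46 billion light-years, in meters

print(r)            # ~4.35e26 m
print(1.0 / r**3)   # ~1.2e-80 1/m^3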

As for how to make sense of the visible universe we observe around us:
imagine that when we look in any direction in our universe, depending on how far we look,
for each point in the universe we see the light that left that point that long ago in time.
(And the current actual distance (in space) to that point can be calculated by applying what we know about the expansion of the universe.)