It is clear that when it comes to solving numerical search problems like Integer Factorization, quantum computers allow us to find the solution(s) instantly.

We just set up the problem (multiply two unknown integers to get an unknown integer result, then set the unknown result to the value we want) and the input integers instantly become known.

So quantum computers are infinitely more powerful than regular computers for solving numerical search problems.

But we also use regular computers for symbolic calculation.

(CAS (Computer Algebra System) software like Mathematica, Maple, etc.) What more could quantum computers provide when it comes to symbolic calculation?

I think they could provide the same benefit as for numerical calculation, meaning instantly solving symbolic search problems.

Imagine if we could just set up an equation expression string as input; then the quantum computer sets the output string (with unknown value) to a general solution expression (known value), if such a solution really exists/is possible.

For example:

1)

Input string: "a*x^0+b*x^1=0"

String value search problem: "x=?"

Output string: "-a/b"

2)

Input string: "a*x^0+b*x^1+c*x^2=0"

String value search problem: "x=?"

Output string: "(-b+(b^2-4*a*c)^(1/2))/(2*c)"
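Both output strings can be sanity-checked classically by substituting them back into their input equations (a minimal sketch of the verification step, my own illustration). Note that since the equation is written a*x^0+b*x^1+c*x^2=0, c is the leading coefficient, so the quadratic root's denominator is 2*c:

```python
import cmath  # complex sqrt handles negative discriminants too

def linear_residual(a, b):
    """Substitute the candidate solution E = -a/b into a + b*x."""
    E = -a / b
    return a + b * E

def quadratic_residual(a, b, c):
    """Substitute E = (-b + sqrt(b^2 - 4ac)) / (2c) into a + b*x + c*x^2.

    Here a is the constant term and c the leading coefficient, matching
    the input string a*x^0+b*x^1+c*x^2=0."""
    E = (-b + cmath.sqrt(b * b - 4 * a * c)) / (2 * c)
    return a + b * E + c * E * E

print(linear_residual(3, 4))             # 0.0
print(abs(quadratic_residual(6, 5, 1)))  # ~0 (6 + 5x + x^2 has root x = -2)
```

A valid solution string is exactly one whose residual vanishes for all coefficient values, which is the test condition used below.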

I think using quantum computers for symbolic calculation should allow us to solve many important such problems which we cannot solve with regular computers in a practical time.

I am guessing those would even include some Millennium Prize Problems, like finding (all) general solution expressions for the Navier-Stokes equations (and proving the Riemann Hypothesis?).

I think, assuming we will have a suitable general-purpose quantum computer someday, the only issue is figuring out exactly how to express and solve symbolic calculation problems like the two examples above.

Let's try to solve the first problem using a quantum computer:

Assuming the quantum computer symbolically calculated the solution (expression string E), how could we test whether it is correct?

How about creating an equation that would be true only if E is a valid solution, which is just the input equation itself:

"a*E^0+b*E^1=0" or "a+b*E=0"

Then I think the solution algorithm for the quantum computer would be:

Start with unknown values E, a, b.

Calculate a+b*E (not numerical calculation but symbolic expression calculation, using an expression tree).

Set the unknown calculation result to 0.

Unknown string E collapses to the answer: "-a/b"
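A classical brute-force analogue of this collapse step can be sketched as follows (my own illustration; the tiny candidate set and string format are hypothetical): enumerate small candidate expression strings for E and keep only those for which a + b*E vanishes for random numeric values of a and b, the role the quantum search is imagined to play.

```python
import itertools
import random

random.seed(0)  # reproducible sketch

def candidates():
    """Enumerate small expression strings over the symbols a and b
    (a deliberately tiny search space, for illustration only)."""
    atoms = ["a", "b"]
    exprs = list(atoms)
    for x, y in itertools.product(atoms, repeat=2):
        for op in "+-*/":
            exprs.append(f"({x}{op}{y})")
    exprs += [f"(-{e})" for e in list(exprs)]
    return exprs

def is_root(expr, trials=20):
    """The 'oracle': does a + b*E vanish for random values of a and b?"""
    for _ in range(trials):
        a, b = random.uniform(1, 9), random.uniform(1, 9)
        try:
            E = eval(expr, {"a": a, "b": b})
        except ZeroDivisionError:
            return False
        if abs(a + b * E) > 1e-9:
            return False
    return True

solutions = [e for e in candidates() if is_root(e)]
print(solutions)  # the only survivor is equivalent to -a/b
```

On a classical machine this enumeration grows exponentially with expression size; the hope expressed above is that a quantum search would not.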

And if we consider how we could do the symbolic calculation step above using a regular computer, which requires manipulating an expression tree using stack(s), then we need to figure out how to create a quantum stack using a quantum computer.

(Imagine a stack that can do any number of push/pop operations instantly, to collapse into its final known state instantly.)

(If we could do quantum stacks, then we also could do quantum queues.)

(And then quantum versions of other standard programming data structures would also be possible.)
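For concreteness, here is the classical stack manipulation that such a quantum stack would be expected to perform (a minimal sketch; the postfix token format is my own illustration):

```python
def eval_postfix_symbolic(tokens):
    """Build an expression string from postfix tokens using a stack.

    This is the classical expression-tree construction that the
    'quantum stack' above would need to perform in superposition."""
    stack = []
    for tok in tokens:
        if tok in "+-*/":
            right = stack.pop()
            left = stack.pop()
            stack.append(f"({left}{tok}{right})")
        else:
            stack.append(tok)
    return stack.pop()

# a + b*E in postfix form, as in the linear-equation example
print(eval_postfix_symbolic(["a", "b", "E", "*", "+"]))  # (a+(b*E))
```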

What could be the most practical way to build a large scale quantum computer?

I think currently building a quantum computer is really hard because our physical world is highly noisy at quantum scale.

Imagine using single atoms/molecules as qubits.

Imagine cooling them close to absolute zero in a vacuum environment that must be perfectly maintained.

Could there be a better way?

What if we create a quantum computer in a different level of reality, which does not have noise?

Think about our regular digital computers.

Could we think of the bit values in the memory of a working regular computer as a different level of reality made of quasiparticles, one which does not have noise?

Can we create an extrinsic-semiconductor-based quantum computer chip that creates and processes qubits as quasiparticles?

(And could the quantum computer be designed and operated like a cellular automaton, similar to Wireworld?)

https://en.wikipedia.org/wiki/Quasiparticle

https://en.wikipedia.org/wiki/Electron_hole

https://en.wikipedia.org/wiki/Extrinsic_semiconductor

https://en.wikipedia.org/wiki/Cellular_automaton

https://en.wikipedia.org/wiki/Wireworld

## Thursday, September 14, 2017

### Continuum Hypothesis is False

The continuum hypothesis states: "There is no set whose cardinality is strictly between that of the integers and the real numbers".

Resolution:

Express each set in question as a set of points in (N-dimensional) Euclidean space,

and calculate their fractal dimensions to compare their cardinalities =>

Set of all integers => Fractal Dimension=0

Set of all real numbers => Fractal Dimension=1

Set of all complex numbers => Fractal Dimension=2

Set of all quaternion numbers => Fractal Dimension=4

Set of all octonion numbers => Fractal Dimension=8

Set of all sedenion numbers => Fractal Dimension=16

Set of all points of a certain fractal => Fractal Dimension:

Cantor set: 0.6309

Koch curve: 1.2619

Sierpinski triangle: 1.5849

Sierpinski carpet: 1.8928

Pentaflake: 1.8617

Hexaflake: 1.7712

Hilbert curve: 2
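The fractal dimensions listed above can be reproduced with the similarity-dimension formula D = log(N)/log(s), where the fractal splits into N copies each scaled down by a factor s (a short sketch; the (N, s) pairs are the standard constructions for these fractals):

```python
import math

def similarity_dimension(copies, scale):
    """Similarity (Hausdorff) dimension of a self-similar fractal:
    D = log(N) / log(s) for N copies, each scaled down by factor s."""
    return math.log(copies) / math.log(scale)

phi = (1 + math.sqrt(5)) / 2  # golden ratio, used by the pentaflake

fractals = {
    "Cantor set":          (2, 3),
    "Koch curve":          (4, 3),
    "Sierpinski triangle": (3, 2),
    "Sierpinski carpet":   (8, 3),
    "Pentaflake":          (6, 1 + phi),
    "Hexaflake":           (7, 3),
}
for name, (n, s) in fractals.items():
    print(f"{name}: {similarity_dimension(n, s):.4f}")
```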

## Tuesday, September 5, 2017

### EXPLAINING DARK ENERGY AND DARK MATTER

If the Universe/Reality (U/R) is a Cellular Automaton (CA) (Quantum Computer (QC)) operating at the Planck Scale (PS), then how could it explain Dark Energy (DE) and Dark Matter (DM)?

Assume Quantum Physics (QP) is its first Macro Scale (MS) Emergent Property (EP), assume Relativity Physics (RP) is its second MS EP,

then Dark (Energy & Matter) Physics (DP) could be its third MS EP!

(Just like, for example, Newtonian (Navier-Stokes) Physics (NP) is the first Macro Scale (MS) Emergent Property (EP) of some CA, like FHP and LBM.)

Is the ratio of DM to Matter (DM/M) always (everywhere and everywhen) constant in the Universe?

Is the ratio of DE to Vacuum Energy (DE/VE) always (everywhere and everywhen) constant in the Universe?

(If so, could they be a consequence of DP being what is said above?)

Does every EP have a finite scale range?

(Do fluid simulation CA (like FHP/LBM) have a second layer of EP at super-macro scale (where NP no longer applies)?)

## Wednesday, August 16, 2017

### NATURE OF TIME

Concept of “now” being relative implies unchanging 4D “Block Universe” (so future is predictable) and it comes from Relativity.

But QM says the opposite (future is unpredictable (only there is a certain probability for any future event)).

As we look at the Universe/reality starting at microscale (particle size) and go to macroscale, future events become more and more certain.

For example, think of how certain the things you plan to do tomorrow are: can't we say they are not perfectly certain, but close?

But also think of how certain the motion of Earth in its orbit tomorrow is. Isn't it much more certain (but still not perfectly certain)?

The future being unpredictable at microscale and becoming more and more predictable at higher and higher scales also happens in Cellular Automata (which are used for fluid simulation).

I think one clear implication of future becoming more and more predictable at higher and higher scales is that, time must be an emergent property.

Which in turn implies spacetime must be an emergent property.

Which in turn implies Relativity must be an emergent property.

I think I read somewhere that the equations of GR are similar to the equations of some kind of (non-viscous?) fluid.

If so, it would make sense, considering the Cellular Automata used for fluid simulation show similar behavior to GR.

I just came across a part of an article from Scientific American September 2015 that says something very similar to what I had said about nature of time:

“Whenever people talk about a dichotomy, though, they usually aim to expose it as false. Indeed, many philosophers think it is meaningless to say whether the universe is deterministic or indeterministic. It can be either, depending on how big or complex your object of study is: particles, atoms, molecules, cells, organisms, minds, communities. “The distinction between determinism and indeterminism is a level-specific distinction,” says Christian List, a philosopher at the London School of Economics and Political Science. “If you have determinism at one particular level, it is fully compatible with indeterminism, both at higher levels and at lower levels.” The atoms in our brain can behave in a completely deterministic way while still giving us freedom of action because atoms and agency operate on different levels. Likewise, Einstein sought a deterministic subquantum level without denying that the quantum level was probabilistic.”

(All my comments above also published here:

http://scienceblogs.com/startswithabang/2017/08/13/comments-of-the-week-172-from-sodium-and-water-to-the-most-dangerous-comet-of-all/)

If the future (time) becomes more and more certain as we go from microscale to macroscale, here is a thought experiment for determining how exactly that happens:

Imagine in a vacuum chamber we dropped a single neutral carbon atom from a certain height many times, and measured/determined how close it hits the center of the (circular) target area, and with what probability. Later we repeated the experiment with C60 molecules. And later with solid balls of 60 C60 molecules. And later with solid balls of 3600 C60 molecules. ...

I think what would happen is that bigger and bigger solid balls would hit closer and closer to the center with higher and higher probabilities. And the general graph (an exponential curve?) of the results would tell us exactly how the future (time) becomes more and more certain.

A more advanced version of the thought experiment could be this:

Imagine we started the experiment with micro balls and a very small drop height. Then, as the radius of the solid balls got bigger and bigger, we increased the drop height by the same ratio as the radius.
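One standard quantum contribution to this scatter can be estimated with free Gaussian wavepacket spreading (a rough order-of-magnitude sketch, my own illustration; the 1 nm initial localization is an assumed value, and classical scatter sources like release jitter and residual gas are ignored):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
G = 9.81                # gravitational acceleration, m/s^2

def drop_spread(mass_kg, height_m=1.0, sigma0=1e-9):
    """Width of a free Gaussian wavepacket after the classical fall time:
    sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0^2))^2).
    sigma0 is an assumed initial localization of 1 nm."""
    t = math.sqrt(2 * height_m / G)
    return sigma0 * math.sqrt(1 + (HBAR * t / (2 * mass_kg * sigma0 ** 2)) ** 2)

m_carbon = 12 * 1.66054e-27  # kg
for n_atoms, label in [(1, "C atom"), (60, "C60"),
                       (60 * 60, "ball of 60 C60"), (3600 * 60, "ball of 3600 C60")]:
    print(f"{label}: spread ~ {drop_spread(n_atoms * m_carbon):.2e} m")
```

In this model the spread falls off roughly as 1/mass once spreading dominates, which is one concrete way the hit distribution would tighten for bigger and bigger balls.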

## Monday, August 7, 2017

### FUTURE OF PHYSICS

If we look at history of physics, is there a clear trend to allow us to guess its future?

What are the major milestones in physics history?

I think it could be said:

1) Ancient Greece (level) Physics

2) Galileo (level) Physics

3) Newton (level) Physics

4) Einstein (level) Physics

5) TOE (level) Physics(?)

I think there is indeed a clear trend if you think about it.

Each new revolution in physics brings something like an order of magnitude increase in complexity of math (calculations), not just a new theory.

So I would guess doing calculations to solve physics problems using TOE will be practically impossible using pen and paper only.

I think it will require a (quantum) computer.

(Realize that all physics problems (where an answer is possible) can be solved today using non-quantum (super)computers/calculators/pen & paper.)

I think if the Universe (or Reality) turns out to be a Cellular Automaton design running on an ND-matrix qubit (register) quantum computer (with Planck scale cells),

then it would fit the above guess about the future of physics (TOE) perfectly.

## Monday, July 31, 2017

### Physics Of Star Trek

I saw maybe all Star Trek TV show episodes and movies.

Below I will try to provide more plausible ways of realizing similar technologies according to known laws of physics of our Universe.

I do not know if similar explanations were provided by anyone before.

Super Energy Sources:

They could be portable fusion reactors which are almost perfectly efficient.

They could provide continuous power (similar to DC) or as repeating pulses (similar to AC).

There may be super batteries that store a dense cloud of electron gas in vacuum (or as a BEC?).

Stun guns:

Imagine a super powerful gun that momentarily creates conductive paths in air using pulsed/continuous UV lasers.

It sends a powerful electroshock to the target through those conductive paths.

(I think this tech is already in development.)

Teleportation:

Imagine two teleportation machines (chambers).

The sender machine creates some kind of quantum shock wave that instantly destroys the target object into gamma photons that carry the same quantum information.

That information is sent to the receiver machine, which has a giant BEC (made of the same kinds of atoms/molecules in the same proportions as the target object?).

When the information is applied to the BEC (instantly, like a quantum shock wave), it somehow instantly quantum mechanically collapses into an exact copy of the object.

Phasers:

Instantly destroys the target object using a quantum shock wave similar to the one used in teleportation.

(The target object instantly gets destroyed, similar to teleportation, but there is no receiver for its quantum information.)

Artificial Gravity:

Imagine if we had small coils that can create high level positive/negative spacetime curvatures around them (spherical/cylindrical).

We could place a grid of those coils under floors etc to create artificial gravity.

Force Fields:

Imagine if we created spherical/cylindrical spaceships covered by a dense grid of (+/-) gravity coils,

and also a dense grid of (superconductor) coils that can create (+/-) electric/magnetic fields.

Would it not be possible to use them to create "force fields" all around the spaceships to deflect any kind of attack (atom/particle/photon)?

Cloaking Fields:

Imagine if we created spherical/cylindrical spaceships covered by a dense grid of (+/-) gravity coils.

Would it not be possible to use them to create a photon deflection field all around the spaceships?

Warp Speed:

Imagine if we created spherical/cylindrical spaceships covered by a dense grid of (+/-) gravity coils.

Would it not be possible to use them to create a warp bubble all around the spaceships, to act like an Alcubierre Drive?

Sub-space Communication:

(Since we assume we have the ability to manipulate the curvature of spacetime.)

Imagine we have tech to create micro wormholes as twins and are able to trap them indefinitely.

A communication signal enters either one and instantly comes out of the other.

Each time we create a new set of twin micro worm holes, we keep one in a central hub on Earth,

and the other carried by a spaceship or placed on a different planet/moon/space station.

(The same tech could also be useful to create and trap micro Black Holes, which may be useful as compact batteries.)

Electronic Dampening Field:

Imagine EMP created like a standing wave using a grid of phased array EMP generators.

Spaceships with hulls that can withstand almost any kind of attack, at least for a while if necessary:

How about metallic hydrogen or another solid material that we created using ultrapressure (and temperature)?

I think it is also clear that Star Trek physics requires devices with the ability to create strong positive and negative spacetime curvatures.

How could that work according to the laws and limitations of known physics, assuming they must always be obeyed?

According to General Relativity, spacetime bends in the presence of positive or negative mass/energy(/pressure/acceleration).

What if we destroyed a small amount of matter/antimatter in a spot (as pulses)?

(Could there be an economical way to create as much antimatter as we need? Think about how we can easily induce a permanent magnet to permanently switch its N and S sides by momentarily creating a strong enough reverse magnetic field using an electromagnet.

Could there be any way to create a special quantum field/shockwave (using an electric and/or magnetic field generator, or a laser?)

such that when it passes through a sample of matter (trapped in mid-vacuum), it induces that matter to instantly switch to antimatter (so that instantly all electrons switch to positrons, all protons to anti-protons, all neutrons to anti-neutrons)?)

What if we created an arbitrarily strong volume/spot of magnetic and/or electric field(s)?

What if we created a spot of ultrapressure using a tech way beyond any diamond anvil?

What if we created a spot of negative ultrapressure (by using pulling force)?

(Imagine if we had or created a (solid?) material that is ultrastrong against pulling force (even for a moment)?)

What if we had or created an ultrastrong (solid?) disk/sphere/ring and trapped it in mid-vacuum,

then created an ultrapowerful rotational force on it (even for a moment) using an ultrapowerful magnetic field,

so that the object gained (even for a moment) an ultrahigh speed and/or positive/negative acceleration?

## Sunday, July 30, 2017

### 3D VOLUME SCANNER IDEA

I recently learned about an innovative method for getting 3D scans of objects. It overcomes the line-of-sight problem and also captures the inner shape of the object. A robot arm dips the object into water in different orientations. Each time, how the water level changes over time is measured, and from these measurements the 3D object shape is calculated, like a CAT scan.

I think this method can be improved upon greatly as follows:

Imagine we put a tight metal wire ring around the object we want to scan, maybe using a separate machine.

It could be a bendable but rigid steel wire ring, or maybe a bicycle wheel ring; it could even be a suitable kind of plastic.

The object could be in any orientation, held tight by the ring.

Imagine we have an aquarium tank filled with liquid mercury

(which, unlike water, would keep the object and the tank walls dry, so that measurements would be more precise).

(Also, mercury is conductive, which would make measurements easier using electronic sensor(s).)

(It could also be a cylindrical tank.)

Imagine inside the tank we have a vertical bar that can move a horizontal bar up and down under electronic control.

Imagine that horizontal bar has, at its middle (underside), a hook/lock for the wire ring (around the object).

That hook/lock has an electronically controlled motor that can rotate the wire ring (so the object) to any (vertical) angle.

(To prevent the ring/object from swinging like a pendulum when it is dipped into the liquid (fast) each time, we could add a second horizontal bar with adjustable height that has a hook/lock for the wire ring at its middle (upper side). The ring would then be held in place at its top and bottom points by the two horizontal bars.)

Now imagine that to take new measurements each time, we rotate the object by a small, equal angular amount (within 360 degrees).

Then we dip the object fully into the liquid (at constant speed) and pull it fully back out (at constant speed).

Every time as we dip the object we record the changes in the liquid level in the tank over time.

(While the object is fully dipped, we could rotate it again and then record liquid level changes while we pull the object fully back out, to get two sets of measurements per cycle instead of one.)
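Why the level-vs-time record carries shape information: the derivative of displaced volume with respect to dip depth equals the object's cross-sectional area at the liquid surface. A minimal sketch with a sphere (a shape with a known closed form, my own illustration):

```python
import math

def submerged_sphere_volume(depth, R):
    """Volume of the spherical cap below the liquid surface when a
    sphere of radius R is dipped to a given depth (clamped to [0, 2R])."""
    d = max(0.0, min(depth, 2 * R))
    return math.pi * d * d * (3 * R - d) / 3

# Numerically differentiating displaced volume w.r.t. dip depth recovers
# the cross-sectional (waterline) area at each depth.
R, dd = 1.0, 1e-5
for depth in (0.25, 0.5, 1.0, 1.5):
    dV = (submerged_sphere_volume(depth + dd, R)
          - submerged_sphere_volume(depth - dd, R)) / (2 * dd)
    waterline_area = math.pi * (2 * R * depth - depth * depth)
    print(f"depth={depth}: dV/d(depth)={dV:.6f}, circle area={waterline_area:.6f}")
```

Each dip at a new orientation therefore yields one area-vs-depth profile, and many orientations can be combined tomographically, as in a CAT scan.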

Of course mercury is highly toxic and reacts with some metals.

So it would be best to find a better liquid.

The liquid would need to be non-stick, to keep scanned objects and tank walls dry. As little viscosity and density as possible, a maximal temperature range with volume changing linearly with temperature, and constant volume under common air pressures would be better. Stable (not chemically active) and non-toxic are musts.

Also electric conductivity would be a plus.

References:

https://www.sciencedaily.com/releases/2017/07/170721131954.htm

http://www.fabbaloo.com/blog/2017/7/25/water-displacement-3d-scanning-will-this-work

https://3dprintingindustry.com/news/3d-scanning-objects-dipping-water-118886/

I think these method can be improved upon greatly as follows:

Imagine we put a tight metal wire ring around the object we want to scan, maybe using a separate machine.

It could be a bendable but rigid, steel wire ring, or maybe bicycle wire ring, could be even a suitable kind of plastic.

The object could be in any orientation, hold tight by the ring.

Imagine we have an aquarium tank filled with liquid mercury

(which would keep the object dry unlike water, and also tank walls so that measurements would be more precise).

(Also mercury is conductive which would also make measurements easier using electronic sensor(s).)

(It could also be a cylindrical tank.)

Imagine inside of the tank we have a vertical bar that can move up and down a horizontal bar using electronic control.

Imagine that horizontal bar at its middle (down side) has a hook/lock for the wire ring (around the object).

That hook/lock has an electronically controlled motor that can rotate the wire ring (so the object) to any (vertical) angle.

(To prevent the ring/object moving like a pendulum when it is dipped into liquid (fast) each time, we could add a second horizontal bar with adjustable height, that has a hook/lock for the wire ring at its middle (up side). So the ring would be hold in place from its top and bottom points by two horizontal bars.)

Now imagine that to take new measurements each time, we rotate the object by a small, equal angular amount (within 360 degrees).

Then we dip the object fully into the liquid (at constant speed) and take it fully back out (at constant speed).

Every time we dip the object, we record the changes in the liquid level in the tank over time.

(While the object is fully dipped, we could rotate it again and then record the liquid level changes while we take the object fully back out, to get two sets of measurements per cycle instead of one.)
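As a rough sketch of how each dip's level recording could be turned into geometry: if the dip speed and the tank's cross-section are known, each rise in liquid level gives the displaced volume of one horizontal slice of the object. The function and variable names below are hypothetical, and the sketch assumes the object's cross-section is small compared to the tank's, so the surface-area correction can be ignored:

```python
def cross_sections(levels, dip_speed, dt, tank_area):
    """Estimate the object's horizontal cross-section area per slice.

    levels: liquid level readings taken every dt seconds during the dip.
    dip_speed: constant speed at which the object enters the liquid.
    tank_area: horizontal cross-section area of the tank.

    Assumes the object's cross-section is much smaller than the tank's,
    so the volume displaced between two samples is approximately
    tank_area * (rise in liquid level).
    """
    dz = dip_speed * dt  # slice thickness submerged between samples
    return [tank_area * (levels[i + 1] - levels[i]) / dz
            for i in range(len(levels) - 1)]


# Synthetic check: dipping a cylinder of cross-section 2.0 into a tank
# of cross-section 100.0 should recover ~2.0 for every slice.
levels = [i * 2.0 * 0.01 / 100.0 for i in range(11)]
areas = cross_sections(levels, dip_speed=1.0, dt=0.01, tank_area=100.0)
```

Repeating this for many rotation angles would give slice profiles from many directions, which a separate reconstruction step (not shown) would have to combine into a 3D shape.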


## Saturday, July 29, 2017

### A Simple Derivation of General Relativity

According to Einstein's equivalence principle, a person accelerating upwards in an elevator (in outer space, with no gravity) cannot distinguish the acceleration from gravity (pointing downwards). So acceleration and gravity are physically equivalent.

Assume a (laser) light beam is sent horizontally from one side (wall) of the elevator to the other.

What is the Y coordinate of the beam for a given X or T, if the upward constant speed of the elevator is V?

x=c*t (assuming x is positive towards right)

y=v*t (assuming y is positive downwards)

m=y/x=(v*t)/(c*t)=v/c

Applying parametric to implicit conversion:

x=c*t => t=x/c => y=v*(x/c)=(v/c)*x=m*x => a line with slope m

What is the Y coordinate of the beam for a given X or T, if the upward constant acceleration of the elevator is A?

x=c*t (assuming x is positive towards the right)

y=(1/2)*a*t^2 (assuming y is positive downwards; distance travelled under constant acceleration from rest)

Applying parametric-to-implicit conversion:

x=c*t => t=x/c => y=(1/2)*a*(x/c)^2=(a/(2*c^2))*x^2 (a parabola)

Geometry says:

if a parabola is y=x^2/(4*f), then f is its focal length.

The focal length of a parabola is half of its radius of curvature at its vertex => f=r/2

The radius of curvature is the reciprocal of the curvature (curvature of a circle: 1/r)

Then:

y=(a/(2*c^2))*x^2=x^2/(4*f) => a/(2*c^2)=1/(4*f) => f=c^2/(2*a)

r=2*f=c^2/a => curvature=1/r=a/c^2

Newton's laws say: F=G*M*m/d^2 and F=m*a => acceleration of a unit mass in the gravitational field of a mass M:

a=F/m=F/1=G*M*1/d^2=G*M/d^2

Then:

curvature=a/c^2=(G*M/d^2)/c^2=G*M/(c^2*d^2)

Is this formula for calculating spacetime curvature correct (using the mass of the object (star, planet etc.) and the distance from its gravitational center)? I have no idea. I searched online for a similar formula to compare against but could not find one.

If the formula is wrong, I would of course like to know its correct expression (using the same input variables M and d), and also whether it is possible to derive that formula from the same thought experiment.


## Monday, July 17, 2017

### What Is Spacetime?

First assume there is an N-dimensional uniform matrix (like a crystal) cellular automata quantum computer (UCAQC), where each of its cells is Planck-length sized and made of M qubits (like a register (set)).

Assume our universe is a bubble/ball of information (energy) expanding in that matrix.

Assume time step of UCAQC is Planck time (which leads to speed of light being the ultimate speed).

Assume each particle of Standard Model is a ball/cluster/packet of information moving around.

Assume that when two (or more) particles collide, they temporarily create a combined information (energy) ball that is unstable, because (for some reason) only the particles of the Standard Model are allowed, so the newly created unstable particle is forced to decay/divide into a set of particles allowed by the Standard Model.

Naturally, the existence of a Newtonian spacetime is easy to explain for such a universe.

(Also realize it is naturally compatible with quantum mechanics.)

But how about Relativity?

I think Special Relativity arises because the flow of information about events is limited to the speed of light for all observers.

A thought experiment:

Imagine we have a spaceship in Earth's orbit that sends a blue laser to a receiver on the ground.

Imagine the spaceship starts moving away from Earth, its speed increasing towards the speed of light.

Imagine it reaches a speed at which its laser light looks red to us and to our measurement instruments.

(Because of Special Relativity.)

Realize that an observer on the spaceship would still see blue laser photons leaving the device.

But an observer on the ground sees and measures red laser photons.

The question is, have the laser photons actually lost energy?

Are they really blue (higher energy) or red (lower energy) photons?

Can't we say they are actually blue photons, the same as when they were created, but we see/detect them as red photons because of our relative (observer) motion?

What is really happening is the same as how the Doppler effect changes the frequency of sounds.

Different observers see photons with different energies because the density of information flow is different for each observer,

even though the speed of information flow is the same (the speed of light) for all observers.

That is why I do not think the expansion of the universe actually causes photons to lose energy.

I think all photons stay the same as when they were created, but they can be perceived with different energies by different observers.

(So when we measure energy of a photon, we actually measure its information density; not its total information (which is constant and equal for all photons).)
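The blue-to-red laser example can be checked numerically with the standard relativistic Doppler formula for a receding source. The specific wavelengths below (450 nm for blue, 650 nm for red) are just illustrative values:

```python
def recession_speed_fraction(wavelength_emitted, wavelength_observed):
    """Fraction of the speed of light at which a receding source's light
    is redshifted from wavelength_emitted to wavelength_observed.

    Relativistic Doppler shift: observed/emitted = sqrt((1 + b) / (1 - b)),
    solved here for b = v/c.
    """
    r = wavelength_observed / wavelength_emitted
    return (r ** 2 - 1) / (r ** 2 + 1)


# Speed needed for a 450 nm (blue) laser to be seen as 650 nm (red):
beta = recession_speed_fraction(450e-9, 650e-9)  # roughly 0.35 c
```

So the spaceship in the thought experiment would need to recede at roughly a third of the speed of light for the shift to be that dramatic.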

Similarly, I think the (positive) spacetime curvature around objects with mass compresses the Compton wavelength of all particles present.

In the case of Black Holes, the Compton wavelength of a particle gets compressed as it approaches the event horizon.

Upon reaching the event horizon, the wavelength drops to the Planck length and you get Planck particles (which I think is what Black Holes are made of).
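The idea that compression stops at the Planck length can be illustrated with a quick calculation: the mass at which a particle's reduced Compton wavelength h-bar/(m*c) equals the Planck length sqrt(h-bar*G/c^3) is exactly the Planck mass, and the Schwarzschild radius of that mass is then twice the Planck length (matching the Wikipedia quote cited below that both are "about the Planck length"). A sketch using standard constant values:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

planck_length = math.sqrt(hbar * G / c ** 3)  # ~1.6e-35 m
planck_mass = math.sqrt(hbar * c / G)         # ~2.2e-8 kg

# Reduced Compton wavelength of a Planck-mass particle:
compton = hbar / (planck_mass * c)            # equals planck_length

# Schwarzschild radius of a Planck-mass particle:
schwarzschild = 2 * G * planck_mass / c ** 2  # equals 2 * planck_length
```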


## Friday, July 14, 2017

### Universal Cellular Automata Quantum Computer

If the Universe is a qubit-based CA quantum computer operating at the Planck scale, how can it explain QM and Relativity?

The human mind operating like a quantum computer (software) can explain the Observer Effect:

because of quantum information exchanges between the qubits of the experiment and the qubits of the mind(s) of the observer(s), like operations in a quantum computer.

The particles of the Standard Model (6 quarks + 6 leptons + 4 gauge bosons + 1 Higgs boson), plus the Planck particle, can be explained as (spherical?) clusters of information.

(Then, using the list of quantum properties common to all particles (like energy, mass, charge, spin, ?), it may be possible to determine (at least) how many qubits each (Planck-size) cell of the universe CA quantum computer needs.)

(How can particle interactions be explained?)

It can also explain Relativity, because the speed-of-light limit comes from the (constant) speed of information transmission of the Universe CA quantum computer.

So each observer can receive information only at the speed of light (constant). Non-moving and moving observers watching the same events would disagree on how fast the events unfold, because each receives the information (light) generated by the events at the same speed but with a different information flow density (frequency).

Gravity can be explained as an entropic force.

The Big Bang can be explained as initially creating a ball of (maximally) dense information (energy) in the center of the Universal (CA) Quantum Computer (UCAQC).

Imagine there is a tendency for information to flow from more dense to less dense volumes of the UCAQC, and that this causes the expansion of the universe.

I think at the beginning of the Big Bang this expansion force should have been at its most powerful, and later it would drop.

It could be that:

F = U * (1 - V / W)

Where:

F: Expansion Force at time t after Big Bang

U: an unknown constant

V: Volume of Universe Information Ball at time t after Big Bang

W: Max Possible Volume of Universe Information Ball (at time infinity after Big Bang)

Or maybe the expansion force/speed could depend on the current (uniform) curvature of V.

(I had explained how to calculate universal (uniform) curvature in one of my previous blog posts.)

(But in either case, it would mean there is really no such thing as Dark Energy, nor a universal inflation field.)
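The proposed expansion-force law above is simple enough to sketch directly; U and W are unknowns in the text, so the values used below are placeholders:

```python
def expansion_force(v, w, u):
    """F = U * (1 - V / W): the force is strongest (F = U) at V = 0,
    right after the Big Bang, and drops to zero as the information
    ball's volume V approaches its maximum possible volume W."""
    return u * (1.0 - v / w)


# Placeholder values: U = 1.0, W = 100.0; evaluate at start, midpoint, end.
forces = [expansion_force(v, 100.0, 1.0) for v in (0.0, 50.0, 100.0)]
```

This reproduces the qualitative behavior described in the text: maximal force early on, decaying as the universe approaches its maximum volume.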


### What Black Holes Are Made Of 2

I know that many physicists today believe that infinities are a sign of the breakdown of a physical theory. I think the same. So I don't think BHs have a singularity at their center, which means they must be made of some kind of particle. And I think there is only one particle that fits the bill (and it does so perfectly): a hypothetical particle called the Planck particle. Its Wikipedia page says it already shows up naturally in physical equations/calculations.

Also I think BHs must be in some kind of fluid state, similar to Neutron stars.

For example, I remember reading that complex numbers were showing up in solutions of polynomial equations long before they were discovered.

Extreme speculation mode:

I know complex numbers are extremely useful in physics.

It could be said complex numbers are more powerful by being 2D, instead of 1D.

I think if the universe is some kind of cellular automaton (computer) operating at the Planck scale,

it is quite possible its calculations are done using quaternions (4D), octonions (8D), maybe even sedenions (16D).

Also, if there are singularities in the centers of BHs, how is it possible that singularities (objects of zero size) can differ from each other to create different sizes of BHs around them?

Or should we really accept that properties like mass/energy are just absolutely abstract numbers, so that an object of zero size can contain them (just as pure information) with no problem?

In case what I mean is unclear:

Your viewpoint is: yes, the theory does not apply at the center of a BH, but it still applies everywhere around it. (Or is it that the theory also applies at the center, which is why we must accept the existence of a real singularity?)

But my viewpoint is that the theory breaks at the center, and that means what we think about the structure of BHs must be completely wrong. (Like trying to build a skyscraper on a really bad foundation.)

“What force stops your hypothetical high density ball collapsing into a singularity?”

That is exactly why I was suggesting BHs must be made of Planck particles.

From Wikipedia about Planck Particle:

“its Compton wavelength and Schwarzschild radius are about the Planck length”

Planck particles are the smallest possible particles. Imagine any particle compressed in an unstoppable way: its Compton wavelength gets smaller and smaller until it is finally reduced to the Planck length, where it cannot get any smaller.

I think BHs being made of Planck particles is theoretically possible, and it does not lead to any contradictions with either Quantum Mechanics or Relativity.

But I am not a physicist, and I would like to see Ethan writing a post evaluating this idea, if possible.

(I had posted these above comments about a week ago here:

http://scienceblogs.com/startswithabang/2017/07/07/is-it-possible-to-pull-something-out-of-a-black-hole-synopsis/)


### Ideas For Long Term Future Of Humanity

Can we move Mars (which is too cold today) closer to the Sun to make it more hospitable for human life?

Could we slow down the orbital speed of Mars (to bring it closer to the Sun) by slowly changing the orbits of selected asteroids (and comets?) to make them collide with Mars in a controlled way? (And what if we continued doing that for hundreds of years?)

Colliding asteroids with Mars would also increase its mass, which is a good side effect because Mars is significantly smaller than Earth.

Colliding comets is even better, because they would increase the (surface) water content of Mars.
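As a rough order-of-magnitude check of the idea (a sketch, not a mission plan): by momentum conservation, changing Mars' orbital speed by some delta-v with impactors arriving at relative speed v_rel needs a total impactor mass of roughly M_mars * delta_v / v_rel. The numbers below (10 km/s relative speed, 1 m/s target change) are illustrative assumptions:

```python
MARS_MASS = 6.417e23  # kg

def impactor_mass_needed(delta_v, v_rel, planet_mass=MARS_MASS):
    """Total impactor mass (kg) needed to change the planet's orbital
    speed by delta_v (m/s), assuming impacts arrive at relative speed
    v_rel (m/s) aligned against the orbital motion (simple momentum
    conservation, ignoring ejecta and orbital-mechanics details)."""
    return planet_mass * delta_v / v_rel


# Changing Mars' ~24 km/s orbital speed by just 1 m/s, using impactors
# arriving at 10 km/s relative speed:
mass = impactor_mass_needed(delta_v=1.0, v_rel=1.0e4)  # ~6.4e19 kg
```

For scale, the entire asteroid belt is estimated at only a few times 1e21 kg, which suggests why this would have to be a campaign lasting a very long time, as the text says.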

Since we know the Sun will gradually get hotter and bigger as it ages, here is an utterly insane long-term plan to ensure the distant future of humanity. Assume we have the power to modify the orbits of asteroids and comets such that we can make any of them collide with any planet in a controlled way, so that we can increase or decrease the size of the planet's orbit (we would also keep increasing the mass of the planet we bombard, and we could use available comets to provide extra water to the target planet):

Imagine first we could bombard Mars until its climate and water content is good enough for humanity.

Then move humanity to Mars (or as much of it as we can),

then we could bombard Earth to increase the size of its orbit as much as we want/need.

And afterwards as the Sun keeps getting bigger/hotter,

we could keep moving humanity back and forth between Earth and Mars, and each time after we moved humanity to one of the planets, we could bombard the other planet to increase the size of its orbit as much as we want/need.

Potential problems would be: can we keep the orbits of all the other planets stable for the long term,

and what are the limits of increasing the mass (and water content) of a planet we want to live on?

Also, after how many moves of humanity would we run out of asteroids to use (and only be able to use comets)?

Could we still continue by using comets?

If so when we would run out of comets?

And if we also run out of comets, what would be the final mass (and water content) of Earth/Mars?

What would be the size of the orbit of Earth/Mars, and would there be any chance of moving humanity to a planet of a nearby star?

(Because I think if the size of the orbit is big enough, it could make it possible to come close to a suitable planet of a nearby star. Keep in mind we would prefer to save all of humanity if possible.)

Another potential problem is, even if we added lots of water to Mars, how would we get a suitable atmosphere?

Assuming we have no problem producing electrical power, maybe we could separate lots of water into oxygen and hydrogen gas, and release the hydrogen gas to space.

But then, can we live in an almost pure oxygen atmosphere?

Do the common rocks on Mars have enough nitrogen we could release into the atmosphere?

Or is there any other suitable inert gas we could produce in sufficient quantity from the rocks?

But also, how could we modify the orbits of almost any asteroid or comet?

I don’t think any kind of rocket fuel would be enough.

But assuming we can produce portable fusion power generators that can generate maybe something like megawatts for decades, it may be possible to produce enough thrust in space using only electrical power, in different ways.

One way I was thinking of could be to create giant rotating electric and/or magnetic fields around a spacecraft, to swim through the surrounding sea of cosmic rays (positively and negatively charged particles) like a submarine.

Of course, if we had the technology to easily modify asteroid and comet orbits, it would also be useful for protecting humanity from any unwanted asteroid or comet impacts anywhere.

Also, would any space habitat be a viable place for humanity to live indefinitely?

Wouldn’t it keep getting damaged by cosmic rays?

Could we always repair and protect it?

How about another crazy idea:

Could we build lots of giant towers on Earth whose tops are above Earth's atmosphere?

If so, and if we also have the tech to create efficient and powerful pure-electric drives for space, maybe we could turn Earth itself into a mobile planet.

This may be the craziest idea:

What if we turned Earth into a mobile planet, bombarded Mars with asteroids and comets to bring it closer to Earth and give it similar amounts of water and an oxygen atmosphere, and later also turned Mars into a mobile planet?

Then we would have two mobile planets to live on and move anywhere, maybe even to nearby stars.

Then maybe we could keep creating more mobile planets everywhere we go in the universe.

(I had posted these above comments here about two weeks ago:

http://scienceblogs.com/startswithabang/2017/07/01/ask-ethan-could-we-save-the-earth-by-migrating-it-away-from-the-sun-synopsis)

## Tuesday, July 4, 2017

### Pascal's Wager

My reasoning below is completely hypothetical.

I wanted to try to define an objective approach.

I am not claiming these are the steps I ever followed myself, either.

Pascal's Wager implies that we should give serious consideration to the question of whether or not to believe in (any?) God(s).

Because the potential loss or gain could be infinite.

If we choose not to believe, then I think there is nothing further to consider, because we have our answer.

But let's say we choose to believe; then what? Which God(s) should we believe in?

Then the question becomes which world religion(s) we should choose, doesn't it? Because I think it is obvious that not all religions are compatible with each other, so there is no way we could choose to believe all of them together to cover all available options.

Is there really any way to objectively compare all world religions to make a decision about which one to believe? How could we compare any two religions objectively?

I think the first thing to do would be to gather the available information about all world religions in a common, comparable format. For that, we could create a standard list of questions for all religions.

For each religion we could list:

Which God(s) should we believe in, and what are their powers and properties (like shape, size, age etc.)?

Do those God(s) want us to believe in them and offer rewards/punishments (finite/infinite)?

Would those God(s) treat us with justice? Are they good?

What are their explanations for the existence of the universe and its creation; how the universe works; why it was created?

Why and how humanity was created?

What are the descriptions for afterlife, life in hell, life in heaven?

Are there any serious logical inconsistencies or absolute physical impossibilities in their explanations/beliefs/claims?

How each religion sees all others (like, also okay to believe (now), was okay to believe in the past but not today, never okay to believe)?

How should we live our life (are any kinds of sacrifices needed)?

So after we collected information about all religions in an objectively comparable way,

would that be enough for each one of us to make a choice?

Assume each person on Earth examined our comparable information about all religions,

and somehow each and everyone completely understood the information,

and also agreed it is all completely objective statements about each and every religion.

I wonder what percentage of people would choose which religion and what their reasoning(s) would be for their choices.

Is that it? Can't we even try to choose a religion absolutely objectively?

I think for that we could try to approach the problem mathematically, like in game theory or in probability theory.

But still, can any method of calculation (algorithm) really provide a clear and objective answer without requiring any subjective input values?

I think the answer may be no.


## Monday, July 3, 2017

### Solution of P versus NP Problem

I know that many people have attempted to prove an answer for the P versus NP problem.

I also know that it is one of the seven Millennium Prize problems.

Here is my idea for a proof (possibly with some missing pieces):

Since quantum computers are theoretically capable of an infinite number of calculations per time step (where the minimum step could be the Planck time),

we can say quantum computers are definitely more powerful than any equivalent regular computer.

But there may also be some calculation algorithms where a quantum computer cannot provide an answer any faster than a regular computer.

Then we could at least say for sure that:

computing_power(regular computer) <= computing_power(quantum computer)

If so, then if we can prove that, no matter what calculation algorithm is used, a quantum computer cannot solve any known NP-complete problem in polynomial time, it would mean that solving any NP-complete problem in polynomial time requires a computer more powerful than any quantum computer.

And that would mean the answer for P versus NP problem is P < NP.

(Keep in mind we already know that solving any one of the NP-complete problems means solving all of them, because each of those problems can be translated to any other in polynomial time.)

(And as for what kind of computer could be more powerful than any quantum computer: keep in mind that since a quantum computer is capable of an infinite number of calculations at each time step, I think the only kind of computer which would be more powerful would be one capable of an infinite number of calculations in zero time steps. Not even one time step, because remember a quantum computer can already do that.)

So if solving NP-complete problems in polynomial time requires that kind of computer, then the answer is P < NP, again.

I think the only crucial part of this proof is whether we can prove that quantum computers are incapable of solving NP-complete problems in polynomial time, no matter what algorithm steps are used.

Since all are equivalent, how about we choose the Travelling Salesman Problem (TSP) to solve using a quantum computer?

Assume we represented the input graph structure for the problem in the quantum computer in any way we want, like an adjacency list/matrix for example.

Then we could encode that input state using N registers, each with M qubits.

And for representing the solution output we could use P registers, each with Q qubits.

And we want to set the input registers to any given TSP input state and get the answer in at most a polynomial number of time steps.

Realize that that polynomial number of time steps can be spent following any algorithm we want.
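For contrast, here is a minimal classical brute-force baseline (a sketch of my own, not anything established): with the start city fixed, an n-city instance has (n-1)! candidate tours, so checking them all is factorial time, far from the polynomial bound we want.

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Optimal tour (start city fixed at 0) by checking every permutation.

    dist is an n x n distance matrix.  With the start fixed there are
    (n-1)! candidate tours, so this classical baseline is factorial time.
    """
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# A tiny 4-city instance (made-up distances): only (4-1)! = 6 tours to scan,
# but the count explodes as n grows.
dist = [
    [0, 1, 4, 3],
    [1, 0, 2, 5],
    [4, 2, 0, 1],
    [3, 5, 1, 0],
]
```

Any quantum (or classical) polynomial-time method would have to beat this exhaustive scan for every instance.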

If we look at how a quantum computer allows us to solve the integer factoring problem, assuming my understanding is correct, our inputs are two quantum registers with unknown values.

Then we apply multiplication calculation steps and get an output of unknown value in a third quantum register.

(So unknown inputs and unknown calculation output.)

But quantum mechanics allows us to force the output register to any certain value and get the input register values for certain, or the reverse, where we can force the input registers to certain known values and get the output value for certain.

But realize that for TSP, if we force the output register(s) to the solution for the given input values (assuming we already know the solution), then the input register values cannot be determined,

because the same (optimal) route can be the optimal solution for many different input register states.

So there is a one-to-many relationship between the (optimal) solution state and the possible input states.

So if we force a solution state onto the output registers, then the input register values must be indeterminate.

That means a quantum computer cannot solve TSP in reverse (unlike the Integer Factorization Problem).
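The one-to-many claim above can be checked classically on a toy instance: two different distance matrices can share the same optimal tour, so the tour alone cannot determine which instance produced it. A minimal sketch (the brute-force helper and the example matrices are illustrative assumptions of mine):

```python
from itertools import permutations

def optimal_tour(dist):
    """Brute-force optimal tour with start city fixed at 0 (illustration only)."""
    n = len(dist)

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    return min(((0,) + p for p in permutations(range(1, n))), key=tour_length)

# Two different (made-up) 4-city instances: the second is the first with
# every distance doubled, so it is a genuinely different input state...
dist_a = [[0, 1, 9, 9],
          [1, 0, 1, 9],
          [9, 1, 0, 1],
          [9, 9, 1, 0]]
dist_b = [[d * 2 for d in row] for row in dist_a]

# ...yet both share the same optimal tour: the solution state does not
# determine the input state, so "solving in reverse" is ambiguous.
assert optimal_tour(dist_a) == optimal_tour(dist_b)
```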

And I think this failure in one direction clearly says TSP is a harder problem than the Integer Factorization Problem.

Also realize that the reason we can solve the Integer Factorization Problem very fast using a quantum computer is that there is entanglement between the input and output quantum registers.

(So we can force either the input or output registers to any certain values we want and get the certain (unique) answer for the other side.)

Entanglement requires a one-to-one relationship between the two sides (input/output or problem/answer) to work, and it is a symmetric rule of quantum mechanics.

But also realize that we established above that TSP cannot possibly be solved in both directions using a quantum computer with any polynomial-time algorithm steps. (Because the output-to-input direction is not possible for sure.)

But we also know entanglement needs to work in both directions, because it is a symmetric law of nature.

So I think these mean a quantum computer cannot solve TSP in polynomial time no matter what algorithm is used.

And that means solving TSP in polynomial time requires a computer more powerful than any quantum computer.

And that means P < NP.

Is there anything missing in this proof? (Since obviously I cannot see anything wrong with it myself.)

I think we established that no quantum algorithm (that uses entanglement) can solve TSP in polynomial time.

But what about possibility of a classical (non-quantum; no entanglement) algorithm solving it in polynomial time?

Then we need to ask whether any classical algorithm can be converted to a quantum algorithm.

Is it really possible to have a classical algorithm that cannot be converted to any quantum algorithm?

Because since there is no quantum algorithm for TSP (that always runs in polynomial time), if there is any classical algorithm for TSP (that always runs in polynomial time), it should be impossible to convert that algorithm to a quantum algorithm.

I do not think the existence of such algorithms is possible, but I also do not have any proof for this claim, and I do not know whether such a proof already exists.

Also realize that (assuming what is above is true), if we want an encryption algorithm that cannot be broken by any quantum computer,

that means it needs to be based on an NP problem like TSP, instead of a problem like Integer Factorization.

In the above argument we assumed that a quantum computer cannot solve TSP (in a polynomial number of time steps), because if we have the output (solution) we cannot use it to get the input problem state for it,

because the input state may not be unique, so the input register qubits could not know which bit states to choose. (And so they would stay indeterminate.) But what if that assumption is wrong? What if the input state registers were set to one solution picked from all possible solutions for that certain output state (with equal probability for each)? Would that not mean it may be possible to have a polynomial-time solution in both directions (input to output and output to input)? I think the total number of possible (valid) input states for a certain given output state would often be very large.

Realize that in the TSP what we really have is a certain input state, and we want to find the optimal solution (output state) for it. If we have a candidate output state and want to see whether it is the optimal solution for our certain input state, and each time we try (set the output register(s) to the candidate output state) we get a randomly picked possible valid input state, then we may need to try so many times before we get the input state matching the one we were trying to solve for. So I do not think we could have a polynomial-time solution, which I think means a quantum computer should still be considered unable to solve the problem in both directions.

### Quantum Computers and the Universe

I had read a lot about quantum computers for many years but never really understood how they actually work.

Even though I am someone who has been interested in computers, science, and technology since his early teenage years.

Now I think, if I cannot understand how exactly quantum computers work, why not try to guess by myself?

(I have a computer science undergraduate degree from a US university with minor in math.)

What made quantum computers so popular was the arrival of the internet.

Their so-called "killer application" is Shor's Algorithm (which I never understood either).

I think practically all encrypted private communication on the internet uses the (RSA) public-key cryptography protocol.

What makes it almost unbreakable is the mathematical fact that multiplying two very large positive integers can be done very fast,

but on the other hand doing the reverse (finding those two multiplied (prime) numbers if we were given the multiplication result) is very hard (using normal computers and all the algorithms we know).
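A minimal classical sketch of this asymmetry: the product is computed in one step, while recovering a factor by trial division takes on the order of sqrt(n) steps, a cost that grows exponentially with the number of digits. (The primes below are just small illustrative values, nowhere near RSA sizes.)

```python
def smallest_factor(n):
    """Smallest prime factor of n by trial division.

    The loop runs up to sqrt(n), so its cost grows exponentially with the
    number of digits of n, while the multiplication that produced n is
    effectively instant.
    """
    if n % 2 == 0:
        return 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n

p, q = 1_000_003, 1_000_033    # two small primes, nowhere near RSA sizes
n = p * q                      # the easy direction: one multiplication
assert smallest_factor(n) == p # the hard direction: ~500,000 loop steps already
```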

Even though I never understood how quantum computers work, I think I understood the "magical" power of qubits always.

Unlike a regular computer bit (in its memory, which can always be either 0 or 1 as we set it, at any time, using processor instructions), a qubit is able to stay indeterminate between the 0 and 1 states (for as long as we want?), until we query its value and get an answer of 0 or 1.

Now assume we want to break RSA. I think our (main) problem is this: we have an N-digit binary positive integer which we know was calculated by multiplying two very large (around N/2 digits each?) binary positive (both prime) integers.

(They were chosen to be prime (or prime with very high probability), I think, because that makes the problem the hardest to solve.)

So the question is how Shor's Algorithm could be breaking the code?

I think similar to regular computers, quantum computers must have a set of possible instructions to write programs and do calculations.

I am guessing Shor's Algorithm may be executed on a quantum computer like this:

Assume we have enough qubits to use.

Assume we started with two qubit-based processor registers (each N/2 digits).

Assume we start by setting those two registers to all-undetermined qubit states.

Then assume we run a single multiplication instruction to multiply the values of those two registers and store the result in a third register.

(I think in regular computer processors multiplication is actually done in multiple calculation steps internally but the same sequence is triggered using a single instruction each time.)

When the multiplication is done and the result is stored, how do we get the values of the two input numbers?

I am guessing that a quantum computer must have an instruction for loading known bit values into its qubit registers.

But it must also provide a way to load bit values without actually erasing the previous (and undetermined) bit values of each qubit in a register.

So what we are really talking about is forcing the previous undetermined value of each qubit to collapse into a 0 or 1 we choose.

But if we can do that, can't we use that for FTL communication using entanglement?

Because if we have two entangled qubits (meaning if we make a measurement on either of them, anytime, anywhere, and find its value to be 0, we would know instantly that whenever the other one is measured its value will be found to be 1, and vice versa), and we have the capability to force either one of them to be measured as a 0 or a 1, then we could use one of the qubits to send a bit of information (we choose its value) instantly from one qubit to the other, in either direction.

I know that this is a solved (and tricky) problem where quantum mechanics actually does not allow FTL communication.

(Which is also a rule compatible with Relativity.)

So I think quantum mechanics must be allowing us to force a qubit to any 0 or 1 value we want, but as long as we still don't know what the value of the other qubit (its twin?) will be.

Then how might we solve the so-called Integer Factorization problem quickly using a quantum computer?

Assume we multiplied two unknown positive integers (qubit register values) and calculated the result in a third register.

Then we forced the third qubit register's value to the multiplication result number (which we knew already).

Assume we still preserved the values of the input registers during the whole multiplication calculation, maybe by copying them to two other separate qubit registers before the multiplication

(which would create entanglement between the copied register qubits).

I think if we forced the (multiplication) calculation result register to the value we know, then the values of the two input registers (which we preserved) should get set to the only possible input prime number values.

(Which means we can measure (collapse) those input register qubits anytime we want and learn what their (initially unknown) values were.)
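A purely classical stand-in for this "force the output, read off the inputs" picture (explicitly not how a real quantum computer operates) is to enumerate every candidate input pair and keep only those consistent with the forced output; for factoring a semiprime, the surviving set is just the factor pair:

```python
def consistent_inputs(target, bits):
    """All pairs of `bits`-bit values (>= 2) whose product equals `target`.

    A classical stand-in for "force the output register to `target` and ask
    which input register states remain consistent".  A real quantum computer
    does not work by plain enumeration; this only illustrates that for
    factoring a semiprime the surviving input set is just the factor pair.
    """
    limit = 1 << bits
    return [(a, b) for a in range(2, limit)
                   for b in range(2, limit) if a * b == target]

# 143 = 11 * 13, and both factors fit in 4 bits:
print(consistent_inputs(143, 4))  # [(11, 13), (13, 11)]
```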

If quantum computers really work like this, can they also be explained by the Hidden Variables Hypothesis?

I think it claims that when we measure the previously unknown value of a qubit, we are just finding out what certain 0 or 1 value the qubit was set to in the past.

Realize that if that were true then quantum computers would not work the way we need.

Whether the multiplication calculation result register was set to this value or that, the input register values would not be affected by it, since their unknown (but certain) values would never change.

So it looks like quantum computers allow us to send information instantly across any distance in space (remember, the instant we (force) set any qubit of the output register, that operation instantly sets the value of the corresponding qubit(s) in the input registers), as long as that information is indeterminate.

But can't we also think of the input registers as the past and the output register as the future?

When we (force) set the output register at the end of the multiplication calculation, can't that be interpreted as sending information to the past (instantly)?

And if, in the end, we (force) set the value of any of the input registers,

can't that be interpreted as sending information to the future (instantly)?

If so, then these would mean we can send information across space and(/or?) time instantly, but only as long as it is indeterminate information.

I think quantum mechanics requires that if the universe is some kind of cellular automaton,

then its cells must be individual qubits or a qubit register or a set of multiple qubit registers.

(Probably all cells identical for the whole universe.)

(Also, each qubit cell would probably be connected with N neighbors.

Could it even be that all cells are directly connected with all other cells?)

Also, since it looks like each quantum register of N qubits is capable of making a choice between 2^N different possible answers in an instant, it could be said that each quantum register is capable of (at least) 2^N calculations in an instant.

Then it would mean each qubit is capable of 2^N/N calculations per time step of the computer.

Since N can be as large as we want in theory, that means each qubit is capable of an infinite number of calculations (value evaluations) per time step.

(I think path integrals used to calculate particle actions also indicate that each particle seems to be evaluating infinite number of possibilities at each time step.)

Which brings to mind the question: are quantum computers the ultimate (most powerful) computers possible?

(Which are computers capable of infinite number of calculations at each time step, theoretically.)

I think practically the answer looks like yes (because of quantum theory).

But I think theoretically an even more powerful computer may be possible (though definitely not in our universe), one that does an infinite number of calculations in 0 time steps.

(Which I think some religions imply about what God is capable of,

by saying God can create anything of any size and complexity in an instant, without needing to spend any time on the problem.)

## Saturday, July 1, 2017

### FLATLAND AND CURVATURE OF UNIVERSE

Wikipedia says "Flatland: A Romance of Many Dimensions is a satirical novella by the English schoolmaster Edwin Abbott Abbott, first published in 1884 by Seeley & Co. of London."

I never actually read it, but I know it really helps people understand geometric dimensions.

I do not know if these ideas below ever occurred to anyone:

I think it does not matter whether Flatland (the universe) had no curvature at all anywhere

or it had a (positive or negative) uniform global (universal) curvature of any (constant) value,

Flatland would look flat to Flatland people (any observer living in Flatland who has the same dimensions as Flatland).

Now imagine that flatland is the 2D surface of a 3D sphere which has a uniform positive curvature everywhere (which is 1/r^2).

The question is this:

Can Flatland people really measure the curvature of their universe or not?

I think most people may be assuming that, since the sum of the internal angles of a triangle in Flatland would be greater than 180 degrees,

Flatland people could easily measure the (global and uniform) curvature of their universe.

But can they really do that, just like a 3D being would easily see that

the sum of the internal angles of a triangle in Flatland is obviously greater than 180 degrees?

I think the answer is no.

Imagine a Flatland observer sends a laser beam straight ahead.

Imagine the view of the Flatland observer is like a camera moving along with the photons of the laser beam, just in front of the head of the beam.

Imagine that as the beam and camera move, both follow the curvature of their universe along the path of the beam.

If there are stars in the Flatland universe and the laser beam is moving towards stars, the view of the camera would always be of a flat universe.

Realize that if the universal curvature of the Flatland universe is uniform everywhere,

the Flatland observer would always think their universe is flat.

And I think this would still be the same no matter how many dimensions the Flatland universe really has.

But I also think Flatland people can still measure non-uniform curvatures in their universe, like the curvature created by the mass of a star.

So I think it is quite natural that the global curvature of our universe looks very close to flat.

If our universe started with a Big Bang from a point (singularity or a small spherical object?),

and has been uniformly expanding ever since, and if we combine Occam's Razor with observations of our universe,

I think the simplest global geometry for our universe would be a 3D spherical surface on a 4D sphere.

And just like a 2D spherical surface is curved in 3rd dimension of space,

our universe must be a 3D spherical surface of 3 space dimensions curved in a 4th dimension (time).

If so that implies we can calculate the global curvature of our universe at any time as 1/r^3.

(Where r is the radius of our universe at that time.)

Wikipedia says the distance to the Big Bang is "13.799±0.021 billion years" in time.

But I think if we take the expansion of the universe since the Big Bang into consideration,

the distance in space (the radius of the universe) is currently about 46 billion light-years.

This implies the current global curvature of our universe must be 1/(46 billion light-years)^3.
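Plugging the number in, using this post's own proposed 1/r^3 formula (an assumption of the post; note that the standard Gaussian curvature of a sphere of radius r is 1/r^2, with units of 1/length^2):

```python
# Curvature per the post's proposed 1/r^3 formula.  (The formula itself is
# the post's assumption; standard Gaussian curvature of a sphere is 1/r^2.)
r_ly = 46e9                 # radius of the observable universe, light-years
curvature = 1 / r_ly ** 3   # in 1/light-year^3
print(curvature)            # about 1.03e-32 per cubic light-year
```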

As for how to make sense of the visible universe we observe around us:

Imagine that when we look in any direction in our universe, depending on how far we look,

for each point in the universe, we see the light that left that point that far back in time.

(And the current actual distance (in space) to that point can be calculated by applying what we know about the expansion of the universe.)

## Wednesday, June 28, 2017

### A Proposal for Solving P versus NP Problem

I don't know if this idea for solving P versus NP problem was suggested by anyone before or not:

I think history implies there may be no general polynomial-time algorithm for solving NP-complete problems.

But if there is no single algorithm then how about m algorithms?

Then the question is this: is m finite (P=NP) or infinite (P<>NP)? (Or maybe this is better: P<NP.)

Now imagine that each of those m (heuristic?) algorithms can efficiently solve maybe an infinite number of cases.

But each always leaves out an also infinite number of cases.

So the question is, if we keep designing new (heuristic) algorithms for the cases left out by the previous algorithms,

would we need to design an infinite number of algorithms or not?

And if finding the answer theoretically is not possible, can't we search for it experimentally?

(If all NP-complete problems can be converted to each other (in polynomial time),

and we have lots of different (heuristic) algorithms for each,

doesn't that mean we could see all those algorithms as different members of the set of m algorithms we seek?)

Assume N is the number of elements in the input problem for a particular NP-complete problem we chose.

Assume we started with N = 1 and increasing the N one by one.

Assume for each N, we enumerated all possible problem setups (initial conditions)

and tried out each of the (heuristic) algorithms we have (so far) on each of those problem setups.

Imagine we filtered out all problem setups which we could solve efficiently (in polynomial time) by at least 1 (heuristic) algorithm we have.

Obviously, if for any N there are still setups which we have no algorithm for, that would mean we don't have the full solution (for sure),

and so we need at least one more algorithm to solve those cases.

(So our search would end at that point, unless/until we find the algorithm(s) we need to solve all remaining cases for that N.)

The goal of this experimental search would be to record M (number of algorithms) for each (increasing) N.

And try to find what the general trend for M is.

(Is it increasing linearly or exponentially or changing some other way we could recognize.)
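A toy version of this experimental search could look like the following sketch. Everything here is a hypothetical stand-in: Subset Sum plays the role of the chosen NP-complete problem, and the two polynomial-time heuristics are made up for illustration.

```python
from itertools import combinations

# Toy version of the experimental search: Subset Sum stands in for the
# chosen NP-complete problem. A "setup" is a tuple of numbers plus a target.

def brute_force(nums, target):
    # Exponential-time ground truth: does any subset of nums sum to target?
    return any(sum(c) == target
               for r in range(len(nums) + 1)
               for c in combinations(nums, r))

def greedy_heuristic(nums, target):
    # Polynomial-time heuristic 1: greedily pick the largest elements that fit.
    total = 0
    for x in sorted(nums, reverse=True):
        if total + x <= target:
            total += x
    return total == target

def pair_heuristic(nums, target):
    # Polynomial-time heuristic 2: check single elements and pairs only.
    return target in nums or any(a + b == target for a, b in combinations(nums, 2))

heuristics = [greedy_heuristic, pair_heuristic]

def count_unsolved(n, domain=range(1, 5)):
    # "Filter out" every size-n setup that at least one heuristic answers
    # correctly; return how many setups remain with no correct heuristic.
    unsolved = 0
    for nums in combinations(domain, n):
        for target in range(1, sum(nums) + 1):
            truth = brute_force(nums, target)
            if not any(h(nums, target) == truth for h in heuristics):
                unsolved += 1
    return unsolved
```

Here `count_unsolved(n)` returns how many size-n setups no current heuristic answers correctly; recording, for each increasing N, how many heuristics it took to drive this count to zero is the M-versus-N record described above.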

But isn't it true that the heuristic algorithms we have give us near-optimal solutions, not optimal ones, for each input case?

(But aren't there still input cases where they give optimal solutions, which we could use?)

Even if so, we could also accept near-optimal solutions (within a certain (error) threshold (percentage)) for each input setup case.

But then the solution (final set of m (heuristic) algorithms) would be a lesser form of solution (for P vs NP problem) even if we find it.

But also if we have the solution (final set of m (heuristic) algorithms where each is polynomial time),

can we really say we have a general polynomial time solution?

For any given setup case of the problem, how can we select one of the m algorithms we have that can solve it (in polynomial time)?

Is that selection algorithm itself (if it exists) polynomial time or not?

## Saturday, June 24, 2017

### What Are Black Holes Made Of?

I do not think Black Holes actually contain a singularity (a point of infinite density) at their center.

And so if they are made of particles then there is really only one option:

They must be made of so called Planck particles (theoretically the smallest and most energy-dense particles).

Planck particles are thought to be extremely unstable, but consider that free neutrons are also unstable, yet they are stable inside a neutron star.

I think neutron stars must be in some kind of fluid state (consider how two neutron stars would/could merge), so Black Holes could also be similar.

## Saturday, June 10, 2017

### What is the mechanism of Hawking Radiation?

For a long time I thought a pair of virtual particles gets created near the event horizon (due to the uncertainty principle).

Normally they would quickly destroy each other, and they never create flashes of gamma photons because one of the twin particles always has positive and the other negative mass/energy.

But if a particle with negative mass/energy sometimes falls through the event horizon, the other could fly away as radiation.

So the Black Hole would lose mass/energy over time.

I think this does not make sense because wouldn't it also sometimes be possible for the particle with positive mass/energy to fall into the Black Hole and cancel the mass/energy losses?

Another explanation I saw suggested that particle-anti-particle pairs are created by gravitational energy near the event horizon

(because the gravitational energy there is higher than (any kind of?) pair production energy); with both particles having positive mass/energy, one particle escapes as radiation.

And I think another explanation I saw also suggested similar pair production, but with later matter-anti-matter annihilation producing outgoing (gamma) photons.

But if so, shouldn't Hawking Radiation happen much, much faster? (Because of the perfect efficiency of pair production.)

How about this explanation?

Imagine, instead of pair production (twin particles both with positive mass/energy), virtual pair production (twin particles with one positive and one negative mass/energy).

The particle with positive mass/energy is attracted towards the Black Hole.

The particle with negative mass/energy is repulsed away from the Black Hole.

The particles with positive energy sometimes create particle-anti-particle annihilations between themselves and create outgoing (gamma) photons.

But if so, why could the particles with negative energy not do the same between themselves and create outgoing (gamma) photons with negative energy? (So that energy losses and gains would cancel out.)

## Monday, June 5, 2017

### Automated Stock Trading Methods

Single stock based:

Sell when the stock is up N percent and buy it back when it is down N percent?

Sell when a short term moving average crosses down a long term moving average and buy it back when the short term moving average crosses up the long term moving average?

Sell when the price-earnings ratio is up N percent and buy it back when it is down N percent?

(Multiple indicators similar to PE ratio can be multiplied together and can be used in similar way too.)
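The moving-average crossover rule above can be sketched in a few lines. This is a hypothetical illustration: the default window sizes (3 and 5) and the price series in the note below are made up.

```python
# A minimal sketch of the moving-average crossover rule described above.
# Assumes we start out already holding the stock.

def sma(prices, window, i):
    """Simple moving average of the `window` prices ending at index i."""
    return sum(prices[i - window + 1:i + 1]) / window

def crossover_signals(prices, short=3, long=5):
    """Return (index, 'SELL'/'BUY') pairs where the short SMA crosses the long SMA."""
    signals = []
    holding = True  # assume we start out holding the stock
    for i in range(long, len(prices)):
        prev_diff = sma(prices, short, i - 1) - sma(prices, long, i - 1)
        diff = sma(prices, short, i) - sma(prices, long, i)
        if holding and prev_diff >= 0 and diff < 0:        # crosses down: sell
            signals.append((i, "SELL"))
            holding = False
        elif not holding and prev_diff <= 0 and diff > 0:  # crosses up: buy back
            signals.append((i, "BUY"))
            holding = True
    return signals
```

For example, on the made-up series `[1, 2, 3, 4, 5, 4, 3, 2, 1, 2, 3, 4, 5, 6]` this yields a SELL on the way down and a BUY on the rebound.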

Multi stock based: (More suitable for index funds?)

Multiple indicators similar to PE ratio can be multiplied together and can be used to calculate a score for each different stock.

(If a (normalized) indicator is of the higher-the-better kind then multiply; if of the lower-the-better kind then divide.)

Then sell N worst ones and buy N best ones every M days?
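The scoring-and-rebalancing idea above could be sketched like this. The data layout and function names are hypothetical, and the numbers in the test below are made up for illustration.

```python
# A minimal sketch of the multi-stock scoring rule above. A stock is a
# (name, indicators) pair; every indicator is a (value, higher_is_better) pair.

def score(indicators):
    """Multiply higher-is-better indicators together, divide by lower-is-better ones."""
    s = 1.0
    for value, higher_is_better in indicators:
        s = s * value if higher_is_better else s / value
    return s

def rebalance(stocks, n):
    """Return (worst n names to sell, best n names to buy), ranked by score."""
    ranked = sorted(stocks, key=lambda stock: score(stock[1]))
    return [name for name, _ in ranked[:n]], [name for name, _ in ranked[-n:]]
```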

## Saturday, June 3, 2017

### Super-intelligence vs Genius

A common criticism of IQ tests is that they are unreliable.

(I don't know how many times I saw media declaring a (super-intelligent) child a "genius" who is smarter than even Einstein.)

Why don't people who get really high scores on IQ tests become new Einsteins?

Obviously IQ tests are highly accurate for predicting academic success but not much else.

What are we missing?

I think first we should look at what is the difference between super-intelligence and genius.

Super-intelligent people are super-fast learners and super-fast problem solvers/calculators (like a human-computer hybrid).

On the other hand, it is well-known that Einstein had great difficulty learning new math for his whole life.

(I think if we analyze famous scientists/inventors in history, we would find some are geniuses and some are super-intelligent people.)

Think about the difference between:

intelligence vs wisdom

book-smart vs street-smart

problem solving ability vs creativity

problem solving ability vs reasoning (and argumentation) ability

problem solving ability vs a strong sense of justice

So I think intelligence and genius (wisdom) are actually separate mental properties and they need to be measured separately.

## Tuesday, May 30, 2017

### What is the shape of the universe?

The curvature of the universe seems flat, but that just means its curvature is uniform.

The visible universe's radius seems to be 13.7 billion light years, which is equivalent to 46 billion light years when the expansion of the universe is also considered.

In all directions we look, the start of the Big Bang (singularity?) is the border of the universe (which is a border in time).

The simplest geometry for the universe would be the surface of a sphere, but instead of 2-dimensional space curved in a third space dimension, it is 3-dimensional space curved in the time dimension.

(This means curvature of the universe can actually be calculated!)
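For instance, if the geometry really were spherical with the 46-billion-light-year radius mentioned above, the curvature would follow directly. This is a hypothetical back-of-the-envelope sketch; K = 1/R^2 is just the curvature of an ordinary sphere of radius R.

```python
import math

# Back-of-the-envelope sketch of the claim above, under the assumption that
# the geometry is a sphere whose radius equals the 46 billion light year
# figure quoted above. Purely illustrative numbers.

M_PER_LY = 9.4607e15                   # meters in one light year
R_ly = 46e9                            # assumed radius, in light years
R_m = R_ly * M_PER_LY                  # the same radius, in meters

K = 1 / R_m**2                         # curvature of a sphere of radius R: K = 1/R^2
circumference_ly = 2 * math.pi * R_ly  # distance "all the way around"

print(f"K ~ {K:.2e} per m^2, circumference ~ {circumference_ly:.2e} light years")
```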

Even though the universe is physically similar from any point inside of it, from an observer's point of view anywhere in the universe,

it is like the observer and the point of the Big Bang are located at opposite poles of a sphere.

(Light rays sent in any direction would converge on the (same) big bang point at the other pole of the sphere.)

(But if so, why is the Cosmic Microwave Background not the same in all directions? It may be explained by the CMB origin being some time later than the Big Bang origin. That means the CMB is like a circle around the Big Bang (pole) point. So light rays originating from the observer pole would hit the CMB circle at different points.

(Also the density of the universe along each light ray could be different. Meaning the CMB could actually be the same in all directions.))

Also if the shape of the universe is really as described above,

that means the size of the visible universe and the size of the whole universe must actually be equal!

## Sunday, May 28, 2017

### Future of Computers

I do not know what the future of the computer world will actually be like, but I have a bunch of ideas about what would be best.

What should be the future of computer world?

Physically they keep getting smaller and/or more capable.

Speed and memory keep getting increased.

Graphics and networking keep getting better.

(But Moore's Law ending?)

16, 32, 64,...-core (Network-On-A-Chip) processors?

I think number of processor cores should be increased as much as possible.

It is true that not all software could take advantage of it but there are plenty of computing tasks which are what is called embarrassingly parallel.
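As a toy illustration (hypothetical, not from the post) of an embarrassingly parallel task: summing a range split into independent chunks that never need to communicate, so they can be handed to any number of workers.

```python
from concurrent.futures import ThreadPoolExecutor

# A toy "embarrassingly parallel" task: summing independent chunks of a range.
# (For CPU-bound Python code a process pool would normally be used instead of
# threads; a thread pool keeps this sketch self-contained.)

def chunk_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    step = n // workers
    # The last chunk absorbs the remainder so the chunks exactly cover range(n).
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))
```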

128-bit, 256-bit,... processors?

I think going beyond 64-bit processors would also help computers to get faster and faster.

There are always plenty of tasks a processor does which can be done faster by processing more bytes at the same time.

I think the future of graphics cards should be a standard (voxel-based?) real-time (3D) ray tracing GPU.

(Just like sound cards became standard (after reaching stereo CD quality?).)

Optical processors/computers?

(Will most computers become optical someday?)

Quantum computers?

If quantum computers become common and cheap someday, can they replace all other computers?

I think they look more fit to be (hard) problem solvers than general purpose computers.

If so they may always stay separate from general purpose computers and/or they may become coprocessors in all computers.

ANN (Artificial Neural Network) coprocessors?

(Back in the 90s computers had separate math coprocessors (and/or DSP?).)

I think it is a good idea to add ANN coprocessors to computers to handle tasks which require human-like learning.

How about also adding Genetic Algorithm and/or Simulated Annealing coprocessors? (A quantum coprocessor could do both?)

How about creating standard RISC instruction sets for 8/16/32/64...-bit processors and be done with it?

What is the ultimate CISC processor? picoJava?

picoJava-like special processor design for each common programming language?

(FPGA coprocessor that can switch to any (high-level) language anytime?)

What should be the future of programming languages?

I think expressiveness is the most important characteristic of a programming language.

(I had read that implementing the same algorithm in Python requires typing about 1/6 the number of characters compared to C++/Java.)

I think the most advanced programming language is the one that is closest to pseudocode.

Can AI replace programmers?

I think not. But I think AI can help programmers a lot someday.

Imagine a programmer writes pseudocode and AI tries to convert it to a software in any target programming language.

Imagine AI analyses pseudocode and asks programmer to clarify anything that looks unclear.

Imagine AI and programmer working together to debug software.

## Sunday, May 21, 2017

### Is mathematics invention or discovery?

I think although natural numbers, a few basic kinds of geometry, and basic polynomials could be seen as inventions, on the other hand real/complex/quaternion/octonion/sedenion numbers, decimal/hexadecimal/binary/octal number systems,

the infinite family of arithmetic operations (addition, multiplication, power, tetration, ... and their inverses), fractal geometry, prime numbers etc. all look like discoveries.

I think the evidence for discovery is much greater than the evidence for invention, but ultimately it may be impossible to prove either side of the argument.

I think math is discovery and so math has its own existence but it is truly an abstract existence.

If math is an abstract existence then could any mathematical object come into real existence by itself?

I think not.

Is mathematics infinite (when trivially infinite stuff is taken out)?

For example it is trivial that each type of polynomial has infinite degrees (and dimensions (number of unknowns)) but is the total number of non-trivially different kinds of polynomials infinite?

What if we assume all kinds of possible polynomials as just one part of math?

Is the total number of such parts of math infinite?

Is Physics infinite?

In other words, how many non-trivially different universes are mathematically possible that could support life/(human-like) intelligence?

I think it is obvious that if we change the number of dimensions of the universe and found that universe could support life/intelligence,

that universe must be counted as a non-trivially different universe,

but what if we change one of the basic constants of physics just a tiny bit; should we count that as a non-trivially different universe also? If not, then how much difference (as a percentage maybe)

for which basic constant of physics should count as a non-trivially different universe?

And are all such different universes, which still follow the laws of physics of this universe, the only possibilities?

What if we allow any kind of physical laws? How many non-trivially different sets of physical laws (for a universe that could support life/intelligence) possible?

(Of course, is it even theoretically/practically possible to mathematically determine if a given set of physical laws for a universe could support life/intelligence (including when using computer simulations for experimentation)?

How could we test if any given universe (set of physical laws) could support life and/or intelligence?

There is an idea in computer science for testing equivalency.

For example it is known that all kinds of (completely different looking) NP-complete problems are actually equivalent because it is known how each one can be converted to one of the others.
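As a tiny concrete example of such a conversion (my illustration, not from the post): a graph has an independent set of size k exactly when it has a vertex cover of size |V| - k, so one problem can be answered by asking the other. The conversion itself takes polynomial time; only the search remains hard.

```python
from itertools import combinations

# Classic polynomial-time conversion between two NP-complete problems:
# graph G has an independent set of size k  <=>  G has a vertex cover
# of size |V| - k.

def is_independent_set(edges, s):
    return all(not (u in s and v in s) for u, v in edges)

def is_vertex_cover(edges, c):
    return all(u in c or v in c for u, v in edges)

def has_independent_set(vertices, edges, k):
    return any(is_independent_set(edges, set(s))
               for s in combinations(vertices, k))

def has_vertex_cover(vertices, edges, k):
    return any(is_vertex_cover(edges, set(c))
               for c in combinations(vertices, k))

# The reduction: answer one problem by asking the other.
def independent_set_via_vertex_cover(vertices, edges, k):
    return has_vertex_cover(vertices, edges, len(vertices) - k)
```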

Also it is known all kinds of theoretical computers are equivalent because all can be converted to a (Universal) Turing Machine.

Can we use the same idea for testing if any given universe is equivalent to our universe?

And if a universe is equivalent to ours, would that not mean that universe could also support life and intelligence?

Also there may be other ways to test a universe for equivalency:

If we had a computer simulation of a (simplest) kind of life (living cells) then we could try to convert that simulation to use the physical laws of any given universe.

If we had a computer simulation of a (human-like) AI then we could try to convert that simulation to use the physical laws of any given universe, also.

And if we find that each simulation still works, would that not mean that universe could also support life and intelligence?

Also, if what we are trying to convert are computer simulations, what if we just design a (physical) computer in each universe we want to test? Wouldn't that be enough?)

Is computer science infinite?

(How many real/theoretical non-trivially different computer designs (hardware/software) possible?

Do all have equal power/ability (which is universal calculation)?)

Is chemistry infinite? (How many non-trivially different elements/molecules/chemical reactions possible?)

Is biology infinite? (How many non-trivially different species possible?)

I think, in a similar way, we could ask if any given science is infinite or not.

If any given science is infinite, does that mean humanity can never understand it as a whole?

## Tuesday, May 9, 2017

### WHAT IS ARROW OF TIME?

I had read that laws of physics are symmetric in time.

If so then why do we always see time moving forward?

I have the impression that most physicists think the arrow of time must be caused by entropy.

Because it seems generally entropy is always increasing in the universe.

I know that entropy is a measure of disorder and it seems increasing the temperature of any gas/liquid/solid increases its entropy.

If so can we say increasing temperature of anything must be slowing down time for that thing?

(Can we try to measure the slowing of time by continually heating a transparent gas and measuring the speed of light passing through that gas?

Or are there other kinds of systems we can use as a clock while being heated up?)

My guess is that the answer is no, that time would not slow down with increasing temperature.

Could there be another explanation for arrow of time?

If we are moving forward in time what keeps us from moving backward in time?

Is it Grandfather Paradox (which must apply to anything traveling backwards in time)?

Or is it what is called causality (cause and effect)?

(I think Grandfather Paradox is just another description of causality or more like a special case of it.)

I think causality is the real reason why we cannot move backwards in time, and causality itself is the arrow of time.

So I think if any physical system/experiment can break causality, then there will be something moving backwards in time in there.

## Sunday, May 7, 2017

### TIME TRAVEL

Is time travel really possible?

I think science says time travel to the future is possible, but to the past is more likely not possible.

I think traveling to the past would break causality (which seems to be one of the foundations of the reality around us)

because of the Grandfather Paradox (which should apply even to subatomic particles, not just humans).

I think this is also called Cosmic Censorship (preventing events outside of causality from happening).

But I have also read about some quantum particle experiments which could be interpreted as future events having an effect on the past.

I don't know enough about quantum mechanics to fully support that interpretation, but I think it is possible.

I had also read that anti-particles could be interpreted as particles moving backwards in time.

I have the same thought on that interpretation also.

We could also consider what would happen if we traveled to the past anyway.

I think first of all we could be creating a Butterfly Effect on the weather, which would change all future weather of our world, over time.

Think about how many wars in history were won by the weaker side because of bad weather.

(I had read that the Mongols who conquered China tried twice to conquer Japan but failed because of bad weather (a typhoon?).

Napoleon failed at conquering Russia because of the harsh winter?)

Also think about how different weather would change the history of traffic accidents and/or crimes, even the daily routine of countless people.

I had read a counterargument against the Butterfly Effect which said most (small scale) atmospheric disturbances would actually get damped down.

If true, then we must determine what scale of disturbance does not cause a Butterfly Effect, to know how we could preserve future history if we traveled to the past.

Also I think there may be psychological and/or sociological counterparts to the Butterfly Effect (on weather).

Can we see the daily thought history of each person as a dynamical system, similar to an atmospheric system?

If we change the daily thought history of a person in the past, could that change the whole future daily thought history of that person?

If so, can that change propagate to other people and eventually to all humans of the future?

How about creating sociological disturbances like starting a new fashion, new words/expressions of language?

How about creating scientific and/or technological disturbances like bringing advanced knowledge/tech from the future?

I think the Butterfly Effect is much more general than most people would estimate.

Which means traveling to the past would be very risky if we want to preserve our own history.

Time travel to the future, on the other hand, seems okay for causality.

Relativity says if a spaceship can approach the speed of light, then time would slow down for the ship and anyone inside it.

That means anybody inside the ship would age slower.
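The age-slowing that relativity predicts can be put into numbers with the standard time-dilation factor. A minimal sketch in plain Python (the 86.6% figure is just an illustrative speed, not something from the post):

```python
import math

def lorentz_factor(v_fraction_of_c):
    """Time-dilation factor gamma = 1 / sqrt(1 - v^2/c^2),
    with speed given as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)

# At about 86.6% of light speed, gamma is roughly 2:
# about 2 years pass on Earth for every 1 year on the ship.
print(round(lorentz_factor(0.866), 2))  # -> 2.0
```

The closer the ship gets to the speed of light, the larger gamma grows, and the slower the travelers age relative to Earth.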

But I think by far the most practical way to travel to the future would be suspended animation.

When all biological processes inside an organism slow down, that organism naturally ages slower.

It seems many organisms on Earth can already do this.

I think it is also interesting that there are even religious stories about time travel to the future.

One of them is "Seven Sleepers" in Christianity and Islam.

Another is the story of a prophet(?) in Judaism/Islam(/Christianity?) who "slept" for decades and came back to his town/city.

## Thursday, May 4, 2017

### A COMPARISON OF UNIVERSE IDEAS

IMHO:

Science says there is this universe but maybe there are others.

Judaism and Christianity say there is this universe we live in, and there is another where there are God, Heaven, Hell, Angels, Demons.

I think they also imply the other universe is on top of this one.

Islam says there are seven universes, starting with this one and ending with the one in which there are God, Heaven, Hell, Angels.

I think it also implies this universe is at the bottom, the second one is on top of this one, and so on.

I think for all three religions it either could be spherical/cubic (?) universes on top of each other or it could be spherical/cubic (?) concentric universes one inside another.

I think all three religions also say there are doors (gateways) between each universe guarded by angels.

I think Judaism and Christianity say God created everything and now watches from Heaven.

I think Islam says God is both watching from Heaven and also that God is nowhere.

Islam says space and time are creations of God just like everything else and God does not need them to exist.

Christianity says God created man in his own image.

Islam says God does not look like any of God's creations and moreover God does not have any shape or size and does not have any gender.

Islam says God actually makes everything happen (down to the smallest scale).

Islam says if God stopped (even for an instant), everything in existence would disappear to nothing (in an instant).

(Which is, I think, kind of similar to a description of a computer simulation.

I have also read that some Muslims believed/believe everything in existence is like a dream (imagination?) of God.)

## Monday, May 1, 2017

### ON THE THEORY OF EVOLUTION

IMHO:

I think I have read enough about the theory of evolution so far to understand at least its basic ideas.

It says new species evolve from their common ancestors over time, through random mutations and natural selection by the environmental conditions they live under.

The ones which show more success against environmental conditions get more chance to pass on their genes to future generations.

(I had also read a translation of the book The Selfish Gene.

I think the basic idea was that the genes in each living creature do everything they can to pass themselves on to the next generation.

It seemed to imply the genes are somehow smart, which did not make sense to me. I did not do further research on it.)

It seems all living creatures in nature want to live as long as possible, at least as long as conditions do not get too bad.

Even single-celled organisms seem to try to run away from dangerous adversaries, or even to fight back if they have to.

I am guessing some of them would even work together to attack or defend, if we consider what happens when an animal gets sick, for example.

How can they do these seemingly complex behaviors without any kind of brain?

But if we are supposed to take the theory of evolution as a scientific fact/law,

should we not ask whether it is scientifically proven?

I do not know what the proof for the theory of evolution is.

Admittedly, I have not really tried to find out so far, either.

I have read many articles on biology over the years, on Scientific American, Popular Science, and other popular internet science/tech websites, which all seem to accept evolution without questioning.

I think what is considered the proof is the fossil record, which seems to indicate that as we go from the oldest living organisms to newer ones, they go from single-celled organisms to multi-celled organisms, getting more and more complicated.

Also, there are what look like older and newer versions of similar organisms in different layers of ground, often in the same location.

Isn't this proof enough?

Not to me at least IMHO.

I think it could also be that God (assuming God exists) chose to do it in stages.

(Or could it have been an alien race?

But then we must face the question of how exactly those aliens came to life and started to evolve.

Was theirs the same kind of evolution as ours or not? If the same, that would lead to an infinite regress of questions and answers.

Is that something we could accept as the answer? IMHO no.)

Maybe God wanted to do it in a way similar to geoengineering a whole planet (after creating the universe in a similar step-by-step method).

Why not create everything all at once, instantly?

That may be possible for God, isn't it?

If it was not possible, then God would still exist (and still be powerful enough for us), wouldn't he?

And if it was possible, then why not create the universe and Earth all at once?

I think it is also still possible God just made a decision and chose to create the universe and Earth in stages.

If so, then whether that decision makes sense to us (or not) may be irrelevant from God's point of view, who knows?

Are there any other proof candidates for evolution?

I think everything else could also be explained by adaptations of organisms to their environmental conditions.

I think the viable-offspring rule is the main difference between species.

Which seems to me more compatible with the idea of design than with natural occurrence.

Imagine the hardships we run into when trying to combine computer software which we created, for example.

Combining complex software, like combining machines, seems to require a new design (to create a more complex machine).

I think explaining exactly how the chemicals and conditions that existed on early Earth started life would be a good proof for evolution.

(Of course, any such explanation would need to be repeatable by experiment to be scientifically accepted.)

(I think a physically realistic (atomic scale) simulation would also be acceptable at least for smart people.

Besides showing how life could have started, how about making atomic-scale simulations of living cells or single-celled microorganisms on Earth?

Of course, we would expect those simulated cells to behave the same way as the real ones, to be sure of the accuracy of the simulation.)

There was news about creating artificial life some time ago.

I think the procedure was replacing the whole genome of a living cell with an artificial genome.

How about a completely artificial cell made from completely artificial parts?

Without that, would questions not still linger?

Another scientifically acceptable proof would be finding any kind of alien life, even at the microorganism level.

It looks like decades of search did not find any signs of alien life so far.

Will it ever be found? Who knows for certain really? Do we have a proof for it or just opinions?

I also think yet another definitive proof could be finding a half-human species on earth.

Something similar to Big Foot or Yeti for example, because they appear to be close to half-human creatures from supposed sightings and stories.

If evolution were true, I would expect to see all kinds of half-this half-that creatures, to the point of a continuum of species.

And I would also expect all kinds of different individuals inside each species currently trying out new abilities, limbs, organs etc.

(For example, I think there are individual genes (or groups of a few) controlling how many arms and legs, how much muscle etc.)

Instead, it looks like there is always a big barrier between species, preventing viable offspring.

Would it not make more sense, from the viewpoint of evolution, to have as few barriers as possible?

How about more indirect proofs for evolution?

I think creating human-level AI would be an indirect proof for evolution.

Because it would prove human-like minds can be created artificially without any help from God.

Or how about humanity creating perfectly realistic virtual realities to live in, again without any help from God?

How about proofs against evolution?

Could the incredible complexity and order in the universe and on Earth, the laws of physics, the complexity of living creatures,

and the incredibly precise balance of everything be counted as proof of God?

How about all the kinds of fruits made by plants on Earth?

Aren't they a huge energy expenditure for the plants?

Aren't there much easier strategies those plants could use?

How about their highly varied, complex designs?

Don't they look like they were created especially for humans?

Why is it necessary to have proof anyway?

I think history of science is full of ideas which were strongly thought to be true for a long time but later turned out to be false.

I think Newtonian physics is a good example.

String theory could be another.

Their lesson is that as long as there is no real proof for an idea/theory, it can still turn out to be wrong later.

Also, I would like to clarify that my goal here was to present impartial views and opinions from all sides.

Everybody is free to think whatever they want and free to believe whatever they want.

This even includes what new evidence(s) could come in the future.

People of this earth can interpret everything in different ways.

I do not think everybody will always agree on what really is a proof, or evidence, or a strong sign of what.

Some people even seem to clearly reject almost any kind of scientific proof or evidence.

## Sunday, April 30, 2017

### Some Personal Thoughts Open to Criticism

(All my published ideas are just my personal thoughts always open to criticism. But I reserve the right to not to respond.)

I think all great scientific and technological accomplishments should bring awards for people who did it, even after their death.

I think all kinds of computer software are equivalent to machines, just as any kind of machine is also equivalent to some software.

So software patents should be treated as designs of machines.

Just as any mechanical and/or electronic (and/or biological) machine design needs to be creative enough to be non-obvious to an expert in the field, the same rule should also apply to software patents.

(But living creatures must not be legally counted as biological machines ever.)

Anything published on the internet should have the rights provided by the website where it is published.

The creator of the work should be assumed to have accepted them.

If a website changes the rights, then the new rules should apply only to newer publications (not to older ones, nor to re-publications).

The original owners of works should have the option to make them more public, but not less public.

## Saturday, April 15, 2017

### BIG BANG

The universe may have started not from a singularity (of infinite density?) nor from a quantum fluctuation (how can one happen where even spacetime does not exist?),

but instead from some kind of cosmic egg that contained three different kinds of energy that do not interact with (and destroy) each other:

Dark Energy, Dark Matter, Normal (positive) Energy.

And Dark Energy created (and is still creating) spacetime, Dark Matter created the cosmic web, and Normal (positive) Energy created the matter, stars, and galaxies which coalesced on the cosmic web.

### FREE WILL EXISTS OR NOT

Clearly concepts of free will, mind (or consciousness), human-like AI are closely related.

I think mind could be explained as mental machinery/tools selected/manipulated by free will (if exists).

But does free will really exist, or is it just an illusion?

I think creating human-like AI requires creating a mind and creating a mind requires creating free will.

But I don't think randomness coming from quantum mechanics and/or determinism coming from relativity can really explain free will (assuming it really exists).

I think proving free will exists may be impossible, but disproving it is definitely possible.

If we can create a human-like AI someday (that passes Turing test and all similar tests we can think of)

(whether by keep advancing today's AI systems (bottom-up approach) or analyzing a human brain and creating a computer simulation of it (top-down approach))

that would definitely disprove free will (by showing it is just an illusion).

But if that never happens, and it somehow becomes clear that it never will no matter how advanced science and technology get,

I think only then could we conclude free will must really exist.

(But then it would also mean free will is created by something beyond the laws of physics of our universe.)

## Saturday, April 8, 2017

### Logical Fallacies must be a High School class

If we look at the history of humankind since the beginning, there are so many examples of bad people (like dictators and demagogues) rising to power and manipulating the masses into doing bad things, causing great damage, by using logical fallacies.

I have no doubt it also happens a lot every day at smaller scales, like in companies, schools, hospitals, stores, even in families.

The advertising industry also makes heavy use of logical fallacies.

Sometimes they are used intentionally to manipulate people, and sometimes it is just because no one with a good understanding of logical fallacies is around.

I think there are many more kinds of logical fallacies than most people realize.

(Of course many are just modified versions of some basic types.)

Each type of logical fallacy is like a software bug of the human mind, waiting to be exploited, just like software bugs in computers are used by viruses to take control and spread.

Also I think explaining logical fallacies to adults is never an easy task.

That is why I think logical fallacies must become a separate full time high school class, at least.

Of course starting to teach them even earlier would be much better.

K-12 students should be tested on each and every kind of logical fallacy, again and again with different examples, until they have a good understanding of them all.

This is something extremely important for future of humankind!

https://en.wikipedia.org/wiki/Fallacy

https://en.wikipedia.org/wiki/List_of_fallacies

## Wednesday, April 5, 2017

### Ultimate Space Telescope

I think it is obvious that building bigger and bigger space telescopes, one at a time, for (exponentially) higher and higher costs is not ideal for the long-term future of astronomy.

Then what is the best solution?

I think it must be using a modular design that allows easy expansion; linearly, instead of exponentially.

Imagine a space telescope made of completely independent hexagonal prism shaped units.

Imagine each hexagonal prism unit also allows easy attachment to another copy from any of its 6 sides.

Imagine the first unit sent to space and so we already have a working telescope.

Then we send another copy and it is attached to the first one (using a drone robot?).

Then we send another copy, and another, and so on, and our telescope keeps getting bigger and bigger.

The cost would then obviously increase linearly.

(Actually, the cost of each copy should come down over time.)

Also a big advantage of such a telescope would be much easier and cheaper repairs, compared to a similar size single telescope.

But of course a group of small telescopes attached together would not automatically be equivalent to a single large telescope.

For that what is called "Aperture synthesis" can be used (https://en.wikipedia.org/wiki/Aperture_synthesis).

Or another solution could be making each unit a "Planar Fourier capture array" (https://en.wikipedia.org/wiki/Planar_Fourier_capture_array).
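To see why combining units is worth the trouble, the diffraction limit gives the rough gain: resolving power improves in proportion to the effective aperture (or baseline). A small sketch, assuming the simple Rayleigh criterion and ignoring all engineering details (the 1 m unit size is just an example value):

```python
import math

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Smallest resolvable angle, theta ~ 1.22 * lambda / D
    (Rayleigh criterion), converted to arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

# Visible light at 550 nm: doubling the effective aperture
# halves the smallest resolvable angle.
single = diffraction_limit_arcsec(550e-9, 1.0)    # one unit, 1 m across
combined = diffraction_limit_arcsec(550e-9, 2.0)  # two units spanning 2 m
print(round(single / combined, 1))  # -> 2.0
```

So each added unit buys resolution linearly with the overall span, which is the same reason aperture synthesis arrays keep growing their baselines.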

## Thursday, March 30, 2017

### Virtual Particles

According to Quantum Mechanics, sub-atomic particles called "virtual" particles constantly pop in and out of existence in space-time everywhere.

They are not directly detectable as they appear and disappear extremely fast.

Is it possible that they always get created as pairs of particle-antiparticle?

And if so, is it possible one particle in each pair always has negative mass/energy, so that when they annihilate each other again,

they completely cancel out and so do not create (gamma) photons, unlike what happens when "real" particle-antiparticle pairs annihilate each other?

## Friday, March 24, 2017

### Dark Energy And Conservation Of Energy

If Dark Energy is causing the expansion of the universe,

and a unit volume of space-time must contain a constant amount of zero-point energy, then how is this consistent with conservation of energy?

I read that one opinion is that conservation of energy simply does not apply at cosmological scales.

(To me that seems quite unreasonable! So conservation is okay everywhere in smaller scales but somehow gets broken in bigger scales? Isn't that kind of like if you add up a really big number of zeros and you get a total different from zero?)

Another opinion seems to be (if I understood correctly) that gravitational energy is negative, and as galaxies get farther away from each other

it gets more negative, so there is an energy loss, and that provides the energy needed for the expansion (and photons in the universe also lose energy as their wavelengths increase?).

(To me this logic seems like chicken-and-egg problem!)

(Also I would think that, compared to energy needed to create new space-time for expansion of the universe, energy coming from gravitational binding energy should be minuscule!)
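The claim that photons lose energy as their wavelengths stretch can at least be quantified with the Planck relation E = h*c/lambda. A minimal numeric sketch (the 500 nm wavelength is just an example value):

```python
# Planck relation: E = h * c / wavelength.
# As cosmic expansion stretches a photon's wavelength,
# its energy drops in exact inverse proportion.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

def photon_energy_joules(wavelength_m):
    return H * C / wavelength_m

e_before = photon_energy_joules(500e-9)   # 500 nm photon
e_after = photon_energy_joules(1000e-9)   # redshifted to twice the wavelength
print(round(e_before / e_after, 1))  # -> 2.0
```

A photon redshifted to twice its wavelength carries exactly half its original energy, which is the energy loss that opinion refers to.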

My opinion is that Dark Energy may be a (very different) kind of particle that creates new space-time cells (each the size of a Planck length) when it decays. If so, that means the total amount of Dark Energy in the universe must have been higher in the past and will be lower in the future

(which may be possible to prove or disprove with astronomical observations). (Doesn't that also imply space-time may be some kind of fluid/gas?)
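To get a feel for the scale of the space-time-cell idea: if new space really came in Planck-length-sized cells, a single cubic metre of new space would contain an enormous number of them. A hypothetical back-of-the-envelope count, using the standard Planck length value:

```python
PLANCK_LENGTH = 1.616e-35  # m, standard value of the Planck length

# Number of Planck-length-sized cells that would fit in one cubic metre,
# if space-time really were made of such cells (speculation, not established physics).
cells_per_m3 = (1.0 / PLANCK_LENGTH) ** 3
print(f"{cells_per_m3:.2e} cells per cubic metre")  # about 2.4e104
```

So any decaying particle responsible for creating space would have to produce cells at a staggering rate.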

Also, if Dark Energy runs out someday, does that mean the universe would end in a Big Crunch? How would all the space-time created previously get destroyed again? For that to happen, wouldn't Black Holes need the ability to pull back and destroy space-time?

Also, if Dark Energy is causing the expansion of the universe, why do we still need inflation?

Isn't it possible that Dark Energy has been expanding the universe ever since the Big Bang by continually creating space-time?