20170628

A Proposal for Solving the P versus NP Problem

I don't know whether this idea for solving the P versus NP problem has been suggested by anyone before:

I think history implies there may be no general polynomial-time algorithm for solving NP-complete problems.
But if there is no single algorithm, then how about m algorithms?
Then the question is this: is m finite (P=NP) or infinite (P<>NP) (or maybe better: P<NP)?

Now imagine that each of those m (heuristic?) algorithms can efficiently solve maybe an infinite number of cases,
but each also always leaves out an infinite number of cases.
So the question is: if we keep designing new (heuristic) algorithms for the cases left out by the previous algorithms,
would we need to design an infinite number of algorithms or not?

And if finding the answer theoretically is not possible, could we not search for it experimentally?

(If all NP-complete problems can be converted to each other (in polynomial time),
and we have lots of different (heuristic) algorithms for each,
does that not mean we could see all those algorithms as different members of the set of m algorithms we seek?)
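
A minimal sketch of why this pooling works, in Python; reduce_A_to_B, algorithm_for_B, and translate_solution_back are hypothetical placeholders, not specified implementations. A polynomial-time reduction lets any (heuristic) algorithm for one NP-complete problem B serve as an algorithm for another problem A, since the composition of polynomial-time steps is still polynomial time.

    def algorithm_for_A(instance_of_A, reduce_A_to_B, algorithm_for_B,
                        translate_solution_back):
        # All three steps run in polynomial time, so the composition does
        # too; this is why heuristics designed for different NP-complete
        # problems can all be pooled into the single set of m algorithms.
        instance_of_B = reduce_A_to_B(instance_of_A)     # poly-time reduction
        solution_of_B = algorithm_for_B(instance_of_B)   # poly-time heuristic
        return translate_solution_back(solution_of_B)    # poly-time mapping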

Assume N is the number of elements in the input for a particular NP-complete problem we chose.
Assume we started with N = 1 and increased N one by one.
Assume for each N, we enumerated all possible problem setups (initial conditions)
and tried each of the (heuristic) algorithms we have (so far) on each of those problem setups.
Imagine we filtered out all problem setups which we could solve efficiently (in polynomial time) with at least one (heuristic) algorithm we have.

Obviously, if for any N there are still setups which we have no algorithm for, that would mean we don't have the full solution (for sure),
and so we need at least one more algorithm to solve those cases.
(So our search would end at that point, unless/until we find the algorithm(s) we need to solve all remaining cases for that N.)

The goal of this experimental search would be to record m (the number of algorithms needed) for each (increasing) N,
and to try to find the general trend for m.
(Is it increasing linearly, or exponentially, or changing in some other way we could recognize?)
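
A minimal sketch of this search loop in Python; enumerate_setups, solves_efficiently, and find_new_algorithm are hypothetical placeholders passed in as arguments (enumerating all setups and deciding "solved efficiently" are themselves hard problems left unspecified here):

    def experimental_search(max_N, algorithms, enumerate_setups,
                            solves_efficiently, find_new_algorithm):
        m_per_N = {}  # record m (number of algorithms so far) for each N
        for N in range(1, max_N + 1):
            # Keep only the setups no current algorithm solves efficiently.
            unsolved = [s for s in enumerate_setups(N)
                        if not any(solves_efficiently(a, s) for a in algorithms)]
            while unsolved:
                # The search stalls here unless/until a new algorithm
                # covering some of the remaining cases is found.
                new_algorithm = find_new_algorithm(unsolved)
                algorithms.append(new_algorithm)
                unsolved = [s for s in unsolved
                            if not solves_efficiently(new_algorithm, s)]
            m_per_N[N] = len(algorithms)  # the trend of this is the goal
        return m_per_N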

But don't the heuristic algorithms we have give us near-optimal solutions, not optimal ones, for each input case?
(But aren't there still input cases where they give optimal solutions, which we could use?)
Even so, we could also accept near-optimal solutions (within a certain error threshold percentage) for each input setup case.
But then the solution (the final set of m (heuristic) algorithms) would be a lesser form of a solution (for the P vs NP problem), even if we find it.

But also, if we have the solution (a final set of m (heuristic) algorithms, each polynomial time),
can we really say we have a general polynomial-time solution?
For any given setup case of the problem, how can we select the one of the m algorithms that can solve it (in polynomial time)?
Is that selection algorithm itself (if it exists) polynomial time or not?
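
One possible answer, if m turns out to be finite: no separate selection algorithm may be needed at all, because for NP problems a candidate solution can be verified in polynomial time, so we could simply run all m algorithms and keep the first verified answer; a fixed m times a polynomial is still a polynomial. A minimal sketch, where algorithms and verify are hypothetical arguments:

    def solve_with_portfolio(instance, algorithms, verify):
        # Trying every algorithm in turn IS the selector: if m is a fixed
        # constant and each algorithm is polynomial time, the total running
        # time is m times a polynomial, which is still polynomial.
        for algorithm in algorithms:
            candidate = algorithm(instance)
            if candidate is not None and verify(instance, candidate):
                return candidate
        return None  # no algorithm in the set solved this instance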

20170624

What Are Black Holes Made Of?

I do not think Black Holes actually contain a singularity (a point of infinite density) at their center.
And so, if they are made of particles, then there is really only one option:
they must be made of so-called Planck particles (theoretically, the smallest and most energy-dense particles).
Planck particles are thought to be extremely unstable, but consider that neutrons are also unstable, yet they are stable in a neutron star.

I think neutron stars must be in some kind of fluid state (consider how two neutron stars would/could merge), so Black Holes could be similar.

20170610

What is the mechanism of Hawking Radiation?

For a long time I thought a pair of virtual particles gets created near the event horizon (due to the uncertainty principle).
Normally they would quickly destroy each other, and they never create flashes of gamma photons, because one of the twin particles always has positive mass/energy and the other negative.
But if sometimes the particle with negative mass/energy falls into the event horizon, the other could fly away as radiation,
so the Black Hole would lose mass/energy over time.
I think this does not make sense, because wouldn't it also sometimes be possible for the particle with positive mass/energy to fall into the Black Hole and cancel the mass/energy losses?

Another explanation I saw suggested that particle-anti-particle pairs are created by gravitational energy near the event horizon
(because the gravitational energy there is higher than (any kind of?) pair-production energy), with both particles having positive mass/energy, and one particle escapes as radiation.
And I think another explanation I saw suggested similar pair production, but followed by matter-anti-matter annihilation and the production of outgoing (gamma) photons.
But if so, shouldn't Hawking Radiation happen much, much faster? (Because of the perfect efficiency of pair production.)

How about this explanation?
Imagine, instead of pair production (twin particles, both with positive mass/energy), virtual pair production (twin particles, one with positive and one with negative mass/energy).
The particle with positive mass/energy is attracted towards the Black Hole.
The particle with negative mass/energy is repulsed away from the Black Hole.
The particles with positive energy sometimes create particle-anti-particle annihilations between themselves and create outgoing (gamma) photons.
But if so, why couldn't the particles with negative energy do the same between themselves and create outgoing (gamma) photons with negative energy? (So that the energy losses and gains would cancel out.)

20170605

Automated Stock Trading Methods

Single-stock based:

Sell when the stock is up N percent and buy it back when it is down N percent?
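
A minimal sketch of this band rule in Python; the names are hypothetical, and series is just a list of daily values (closing prices here, but the same rule works for the PE-ratio variant further below by passing a PE series instead):

    def band_signals(series, n_percent):
        signals = []           # (day, "SELL" or "BUY") pairs
        reference = series[0]  # value at the last trade (or at the start)
        holding = True         # assume we start out holding the stock
        for day, value in enumerate(series):
            change = (value - reference) / reference * 100
            if holding and change >= n_percent:
                signals.append((day, "SELL"))
                reference, holding = value, False
            elif not holding and change <= -n_percent:
                signals.append((day, "BUY"))
                reference, holding = value, True
        return signals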

Sell when a short-term moving average crosses below a long-term moving average, and buy it back when the short-term moving average crosses back above the long-term one?
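
A minimal sketch of this crossover rule, again with hypothetical names and prices as a list of daily closes; windows are in days:

    def moving_average(prices, window, day):
        # Simple average of the last `window` values up to and including `day`.
        return sum(prices[day - window + 1 : day + 1]) / window

    def crossover_signals(prices, short_window, long_window):
        signals = []
        for day in range(long_window, len(prices)):
            short_prev = moving_average(prices, short_window, day - 1)
            long_prev = moving_average(prices, long_window, day - 1)
            short_now = moving_average(prices, short_window, day)
            long_now = moving_average(prices, long_window, day)
            if short_prev >= long_prev and short_now < long_now:
                signals.append((day, "SELL"))  # short MA crossed down
            elif short_prev <= long_prev and short_now > long_now:
                signals.append((day, "BUY"))   # short MA crossed up
        return signals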

Sell when the price-earnings ratio is up N percent and buy it back when it is down N percent?
(Multiple indicators similar to the PE ratio can be multiplied together and used in a similar way, too.)

Multi-stock based: (More suitable for index funds?)

Multiple indicators similar to the PE ratio can be multiplied together and used to calculate a score for each stock.
(If a (normalized) indicator is of the higher-the-better kind, multiply; if it is of the lower-the-better kind, divide.)
Then sell the N worst ones and buy the N best ones every M days?
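
A minimal sketch of this scoring-and-rebalancing rule; indicators is a hypothetical mapping from each stock symbol to a list of (normalized_value, higher_is_better) pairs:

    def score(indicator_pairs):
        s = 1.0
        for value, higher_is_better in indicator_pairs:
            # Multiply higher-the-better indicators, divide by
            # lower-the-better ones (values assumed normalized and nonzero).
            s = s * value if higher_is_better else s / value
        return s

    def rebalance(indicators, n):
        # Rank all stocks by score, worst first.
        ranked = sorted(indicators, key=lambda sym: score(indicators[sym]))
        return ranked[:n], ranked[-n:]  # (n worst to sell, n best to buy)

    # Intended to be run every M days, e.g.:
    # to_sell, to_buy = rebalance(current_indicators, n=10)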

20170603

Super-intelligence vs Genius

A common criticism of IQ tests is that they are unreliable.
(I don't know how many times I have seen the media declare a (super-intelligent) child a "genius" who is smarter than even Einstein.)
Why don't people who get really high scores on IQ tests become new Einsteins?
Obviously IQ tests are highly accurate for predicting academic success, but not much else.
What are we missing?

I think first we should look at the difference between super-intelligence and genius.
Super-intelligent people are super-fast learners and super-fast problem solvers/calculators (like a human-computer hybrid).
On the other hand, it is well known that Einstein had great difficulty learning new math throughout his life.
(I think if we analyzed famous scientists/inventors in history, we would find some were geniuses and some were super-intelligent people.)

Think about the difference between:
intelligence vs wisdom
book-smart vs street-smart
problem solving ability vs creativity
problem solving ability vs reasoning (and argumentation) ability
problem solving ability vs a strong sense of justice

So I think intelligence and genius (wisdom) are actually separate mental properties, and they need to be measured separately.