It’s not every day that we come across a paper that attempts to redefine reality. A novel theory proposes that the universe could be a sort of neural network — an interconnected computational network of nodes, analogous to biological neurons — meaning that a system similar in structure to the human brain operates throughout the universe at the most fundamental level.
This controversial notion has been proposed by Vitaly Vanchurin, a physics professor at the University of Minnesota Duluth, as a way to reconcile areas of so-called ‘classical’ physics with those of quantum mechanics — a long-standing conundrum in physics. ‘We are not just saying that artificial neural networks can be useful for analyzing physical systems, or for discovering physical laws — we are saying that this is how the world around us actually works,’ Professor Vanchurin wrote. Bold as it may sound, he continues: ‘It could be considered as a proposal for the theory of everything, and as such it should be easy to prove it wrong.’
‘All that is needed is to find a physical phenomenon which cannot be described by neural networks. Unfortunately, this is easier said than done.’ When considering the workings of the universe on a large scale, physicists use a particular set of theories as tools. These are the tools of ‘classical mechanics’ — built upon Newton’s laws of motion — and Einstein’s theories of relativity, which explain the relationship between space and time, and how mass distorts the fabric of spacetime to create gravitational effects.
To explain phenomena on the atomic and subatomic scales, however, physicists have found that the universe is better explained by the theory of ‘quantum mechanics’. In this theory, quantities like energy and momentum are restricted to having discrete rather than continuous values (known as ‘quanta’), all objects have properties of both particles and waves, and measuring any system changes it.
This last point, the essence of Heisenberg’s ‘uncertainty principle’, means that certain linked properties — such as an object’s position and momentum — cannot both be precisely known at the same time, bringing probabilities into play.
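In its standard mathematical form, the uncertainty principle puts a hard lower bound on the product of the two uncertainties — here Δx is the uncertainty in position, Δp the uncertainty in momentum, and ħ the reduced Planck constant:

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```

Shrinking one uncertainty necessarily inflates the other; no measurement technique, however precise, can evade the bound.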
This ‘quantum gravity problem’ — reconciling general relativity, which explains how mass and energy distort spacetime to create gravitational effects, with quantum mechanics — has been a long-standing hurdle in physics. While the two theories each explain the universe very well on their own scales, physicists have long struggled to unite them into a single universal theory.
For the two theories to mesh, gravity — described by general relativity as the curving of spacetime by matter and energy — would likely need to be made up of quanta and therefore have its own elementary particle, the graviton. Unfortunately, the effects a single graviton would exert on matter would be extraordinarily weak — making theories of quantum gravity seemingly impossible to test, and so leaving no way to determine which, if any, are correct.
Instead of trying to reconcile general relativity and quantum mechanics into one fundamental universal theory, the neural network idea suggests that the behaviors seen in both theories emerge from something much deeper. In his study, Professor Vanchurin set out to create a model of how neural networks work — in particular, in a system with a large number of individual nodes. He says that, in certain conditions — near equilibrium — the learning behavior of a neural network can be approximately explained with the equations of quantum mechanics, while further away the laws of classical physics come into play instead. ‘The idea is definitely crazy, but perhaps crazy enough to be true. That remains to be seen,’ he added.
In addition, he explained, the theory could account for so-called ‘hidden variables’ — unknown properties of objects proposed by some physicists to explain away the uncertainty inherent in most theories of quantum mechanics.
‘In the emergent quantum mechanics which I considered, the hidden variables are the states of the individual neurons and the trainable variables — such as bias vector and weight matrix — are quantum variables,’ Professor Vanchurin said.
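To make that mapping concrete, a toy sketch may help: a tiny network whose neuron states play the role of the ‘hidden variables’ and whose weight matrix and bias vector are the ‘trainable variables’. This is purely illustrative — the loss function, learning rule, and all names below are our own assumptions, not equations from Vanchurin’s paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8
W = rng.normal(scale=0.1, size=(n, n))  # trainable variable: weight matrix
b = np.zeros(n)                         # trainable variable: bias vector
x = rng.normal(size=n)                  # non-trainable: the neuron states
                                        # (the 'hidden variables' in the mapping)

def step(x, W, b):
    """One activation update of the neuron states."""
    return np.tanh(W @ x + b)

def loss(x, target):
    """A stand-in objective the network learns to minimise (illustrative only)."""
    return 0.5 * np.sum((x - target) ** 2)

target = np.full(n, 0.5)  # arbitrary target state
lr = 0.05                 # learning rate

for _ in range(500):
    x_new = step(x, W, b)
    # Gradient of the loss w.r.t. the trainable variables (chain rule
    # through tanh); the states x are updated by the dynamics, not trained.
    err = (x_new - target) * (1 - x_new ** 2)
    W -= lr * np.outer(err, x)
    b -= lr * err
    x = x_new

print(loss(x, target))  # far smaller than at initialisation
```

The point of the sketch is only the division of labour: the fast-changing neuron states `x` are the hidden degrees of freedom, while learning acts on the slower variables `W` and `b` — the pair Vanchurin identifies with quantum variables in his near-equilibrium regime.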
In such a neural network, everything - from particles and atoms to cells and beyond - would emerge in a process analogous to evolution/natural selection, Professor Vanchurin has suggested.
‘There are structures of the microscopic neural network which are more stable and there are other structures which are less stable,’ he told Futurism.
‘The more stable structures would survive the evolution, and the less stable structures would be exterminated.’
‘On the smallest scales I expect that the natural selection should produce some very low complexity structures such as chains of neurons, but on larger scales the structures would be more complicated.
‘I see no reason why this process should be confined to a particular length scale and so the claim is that everything that we see around us — e.g. particles, atoms, cells, observers, etc. — is the outcome of natural selection.’
As to whether the universe-as-neural-network theory has merit, the rest of the physics community appears unlikely to be on board. As Professor Vanchurin concedes, ’99 percent of physicists would tell you that quantum mechanics is the main theory and everything else should somehow emerge from it’ — a tenet at odds with his claim that it is not fundamental. Most experts in both physics and machine learning have expressed skepticism over the idea, declining to comment on the record. Given that this is exactly how many revolutionary ideas begin, however, the proposal may yet merit further consideration.