
Thursday, August 2, 2007

What We Don't Know

What’s at Earth’s Core?
We know that at the center of the planet, about 4,000 miles down, sits a solid ball of iron the size of the moon. We also know that we’re standing on about 1,800 miles of rock, which forms Earth’s crust and mantle. But what’s in between the mantle and the iron ball? A churning ocean of liquid of some sort, but scientists aren’t certain what it’s made of or how it reacts to the stuff around it.
We’re confident there’s a lot of iron in this ocean. But what else? Based on what researchers understand about the pressure, temperature, and density of materials down there, some maintain that the core also contains lots of hydrogen and sulfur. Raymond Jeanloz of UC Berkeley believes that another component is oxygen, which comes from rocks in the part of the mantle that borders the liquid core.
Knowing more about the molten concoction would give scientists clues about how Earth formed and how heat and convection affect plate tectonics. More information could help solve another mystery, too: whether, as many researchers suspect, the inner core is growing. If so, it could eventually overtake the molten metal surrounding it, throwing off Earth’s magnetic field.

Is time an illusion?
Plato argued that time is constant - it’s life that’s the illusion. Galileo shrugged over the philosophy of time and figured out how to plot it on a graph so he could get on with the important physics. Albert Einstein said that time is just another dimension, a fourth one to go along with the up-down, side-side, forward-back we move through every day. Our understanding of time, Einstein said, is based on its relationship to our environment. Weirdly, the faster you travel, the slower time moves. The most radical interpretation of his theory: Past, present, and future are merely figments of our imagination, constructs built by our brains so that everything doesn’t seem to happen at once.
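The "faster you travel, the slower time moves" claim is quantified exactly by special relativity's Lorentz factor, γ = 1/√(1 − v²/c²). A minimal sketch (the 90-percent-of-light-speed figure is illustrative, not from the text):

```python
import math

def time_dilation(v_frac_c: float, proper_time: float) -> float:
    """Time elapsed for a stationary observer while a traveler
    moving at v_frac_c * c experiences proper_time."""
    gamma = 1.0 / math.sqrt(1.0 - v_frac_c ** 2)
    return proper_time * gamma

# A traveler cruising at 90% of light speed ages one year;
# about 2.29 years pass back home.
print(round(time_dilation(0.9, 1.0), 2))
```

At everyday speeds the factor is indistinguishable from 1, which is why the effect feels so alien.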

Einstein’s conception of unified spacetime works better on graph paper than in the real world. Time isn’t like those other dimensions - for one thing, we move only one way within it. “What’s needed is not to make the notion of time and general relativity work or to go back to the notion of absolute time, but to invent something radically new,” says Lee Smolin, a physicist at the Perimeter Institute in Waterloo, Ontario. Somebody is going to get it right eventually. It’ll just take time.



How does a fertilized egg become a human?
Imagine that you place a 1-inch-wide black cube in an empty field. Suddenly the cube makes copies of itself - two, four, eight, 16. The proliferating cubes begin to form structures - enclosures, arches, walls, tubes. Some of the tubes turn into wires, PVC pipes, structural steel, wooden studs. Sheets of cubes become wallboard and wood paneling, carpet and plate-glass windows. The wires begin connecting themselves into a network of immense complexity. Eventually, a 100-story skyscraper stands in the field.
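The cube's runaway multiplication is plain exponential growth, and it shows how few doubling rounds separate a single cell from a whole body. A quick sketch (the roughly 37-trillion-cell count for an adult human is a common outside estimate, not a figure from the text):

```python
def doublings_to_reach(target: int) -> int:
    """Synchronous doublings needed for one cell to reach at least `target` cells."""
    cells, n = 1, 0
    while cells < target:
        cells *= 2
        n += 1
    return n

# ~37 trillion cells in an adult human (a common outside estimate)
print(doublings_to_reach(37_000_000_000_000))  # 46
```

Of course, real development is nothing like synchronous doubling - cells differentiate, migrate, and die - which is exactly the mystery the passage describes.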

That’s basically the process a fertilized cell undergoes beginning with the moment of conception. How did that cube know how to make a skyscraper? How does a cell know how to make a human (or any other mammal)? Biologists used to think that the cellular proteins somehow carried the instructions. But now proteins look more like pieces of brick and stone - useless without a building plan and a mason. The instructions for how to build an organism must be written in a cell’s DNA, but no one has figured out exactly how to read them.


What happened to the Neanderthals?
They were our cousins, our hominid cousins. They looked like us, they walked like us, they may have even thought like us. So why did the Neanderthals disappear, while we Homo sapiens dug in and stayed?

Ever since the first Neanderthal bones were discovered 150 years ago in Germany’s Neander Valley, paleoanthropologists have sought to understand what could possibly have destroyed the once-thriving and widely dispersed species of prehistoric human. By most measures, the Neanderthals were the equal of our direct ancestors, the fully modern out-of-Africa characters often called Cro-Magnons, with whom the Neanderthals coexisted for thousands of years. Like our forebears, Neanderthals were supple sojourners, happily colonizing parts of Europe and Asia. They stood upright, skillfully sculpted and wielded stone tools, and buried their dead with pomp and hope. They were slightly larger and more muscular than their Cro-Magnon counterparts, and their brains were bigger, too. Yet by about 30,000 years ago, the Neanderthals had vanished, leaving Cro-Magnons as the sole survivors of the tangled Hominina tribe. Moreover, while Neanderthals may well have been capable of interbreeding with Cro-Magnons, recent DNA analysis has revealed no signs that such Stone Age Capulet-Montague mergers occurred.

Some scientists have attributed the Neanderthals’ demise to chronic disease, pointing out that many Neanderthal skeletal remains show signs of arthritis and other bone disorders. Other people have wondered whether genocide was to blame. Perhaps the Cro-Magnons systematically exterminated their competitors, just as chimpanzees have been observed hunting down and killing every last member of a neighboring chimp troupe.

Another, more recent, hypothesis is that Homo sapiens outcompeted Homo neanderthalensis because of a difference in their economic systems. Reporting in the December issue of Current Anthropology, Steven Kuhn and Mary Stiner of the University of Arizona wrote that the archaeological record suggests all Neanderthals - male, female, adult, child - focused their efforts on “obtaining large terrestrial game.” In other words, they were all hunters. The Cro-Magnons, by contrast, appear to have divided labor along more or less sexual lines, with men doing most of the big-game killing, women and children gathering tubers and other plant foods, and everybody sharing the flesh and fruits of their efforts. By adopting this sort of specialization of labor, the researchers speculate, Homo sapiens likely proved more efficient and flexible than Neanderthals and were able to expand their population more rapidly.

In other words, at least according to this new theory put forward by Stiner and Kuhn, the Neanderthals weren’t felled by a pathogen or a primordial Slobodan Milosevic. They were done in by the bedrock family values of Fred and Wilma Flintstone.


Why do we sleep?
It’s a catchy phrase: You snooze, you lose. But cutting out those 40 winks would be a bad idea. All mammals sleep, and if they’re deprived of shut-eye they die - faster than if they’re denied food. But no one really knows why.

Obviously, sleep rests the body. But watching TV does that, too. The answer must lie in the noggin. One leading theory says that while we’re awake, a substance builds up in the brain (or gets depleted) and sleep removes (or replenishes) it. That makes sense. For part of the night, the brain idles in an energy-conserving state called slow-wave sleep. Freed from the duties of consciousness, it can focus on cleanup.

The problem with this idea is that another portion of each night, about a quarter, is given to REM sleep, during which the brain is anything but idle. REM stands for rapid eye movement, and it corresponds with vivid dreams, suggesting that it plays a role in consolidating memories. But there’s probably more to it: Though antidepressants suppress REM sleep, patients taking them suffer no memory impairment.

In any case, it’s clear that pillow time serves a critical purpose. Bad things - like some 100,000 traffic accidents a year, not to mention uncounted instances of calling your spouse by your ex’s name - happen when we don’t get enough z’s. At some point, someone’s going to have to dream up a reason.

Where did life come from?
Natural selection explains how organisms that already exist evolve in response to changes in their environment. But Darwin’s theory is silent on how organisms came into being in the first place, which he considered a deep mystery. What creates life out of the inanimate compounds that make up living things? No one knows. How were the first organisms assembled? Nature hasn’t given us the slightest hint.

If anything, the mystery has deepened over time. After all, if life began unaided under primordial conditions in a natural system containing zero knowledge, then it should be possible - it should be easy - to create life in a laboratory today. But determined attempts have failed. International fame, a likely Nobel Prize, and $1 million from the Gene Emergence Project await the researcher who makes life on a lab bench. Still, no one has come close.

Experiments have created some basic materials of life. Famously, in 1952 Harold Urey and Stanley Miller mixed the elements thought to exist in Earth’s primordial atmosphere, exposed them to electricity to simulate lightning, and found that amino acids self-assembled in the researchers’ test tubes. Amino acids are essential to life. But the ones in the 1952 experiment did not come to life. Building-block compounds have been shown to result from many natural processes; they even float in huge clouds in space. But no test has given any indication of how they begin to live - or how, in early tentative forms, they could have resisted being frozen or fried by Earth’s harsh prehistoric conditions.

Some researchers have backed the hypothesis that an unknown primordial “soup” of naturally occurring chemicals was able to self-organize and become animate through a natural mechanism that no longer exists. Some advance the “RNA first” idea, which holds that RNA formed and lived on its own before DNA - but that doesn’t explain where the RNA came from. Others suppose life began around hot deep-sea vents, where very high temperatures and pressures cause a chemical maelstrom. Still others have proposed that some as-yet-unknown natural law causes complexity - and that when this natural law is discovered, the origin of life will become imaginable.

Did God or some other higher being create life? Did it begin on another world, to be transported later to ours? Until such time as a wholly natural origin of life is found, these questions have power. We’re improbable, we’re here, and we have no idea why. Or how.



How can observation affect the outcome of an experiment?
Paging Captain Obvious: To perform a legitimate experiment, scientists must observe the results of a system in motion without influencing those results. Turns out that’s harder than it sounds. In 1927, German physicist Werner Heisenberg discovered that in the Wonderland-like subatomic realm, it is impossible to precisely measure both a particle’s position and its momentum at the same time. “In an attempt to observe an electron or other subatomic particle using light, very short wavelengths of light are required,” says David Cassidy, a science historian and Heisenberg expert at Hofstra University. “But when that light hits the electron, it knocks it all over the place like a billiard ball.” This can become a serious issue when you’re working with the kind of focused, high-intensity beams found in, say, particle accelerators. “The more precise the momentum of the beam particles,” Cassidy says, “the more difficult it becomes to focus the beam.”
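Heisenberg's tradeoff can be put in numbers using the standard uncertainty bound, Δx·Δp ≥ ħ/2. A back-of-envelope sketch (the 1-nanometer confinement is an illustrative choice, not a figure from the text):

```python
HBAR = 1.054571817e-34       # reduced Planck constant, J*s
M_ELECTRON = 9.1093837e-31   # electron mass, kg

def min_momentum_uncertainty(delta_x: float) -> float:
    """Smallest momentum spread allowed when position is pinned
    down to delta_x, from the Heisenberg bound dx * dp >= hbar / 2."""
    return HBAR / (2.0 * delta_x)

# Confine an electron to 1 nanometer and its velocity becomes
# uncertain by tens of kilometers per second.
dp = min_momentum_uncertainty(1e-9)
print(f"{dp / M_ELECTRON:.0f} m/s")
```

Squeeze the position tighter and the momentum spread grows in proportion - the billiard-ball problem Cassidy describes, written as an inequality.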

The real problem, though, is what this so-called observer effect does to reality. Do an experiment to find the fundamental unit of light and you find particles called photons. But change the conditions of the experiment and you get waves. Physicists have no problem with the cognitive dissonance of this “wave-particle duality.” But... so... what’s light made out of, really? The dichotomy raises the mind-boggling prospect that unless we observe an event or thing, it hasn’t really happened, that all possible futures are quantum probability functions waiting for someone to notice them - trees falling unheard in a forest. Maybe this article wasn’t even here until you turned to this page.



How do entangled particles communicate?
One of the zanier notions in the plenty zany world of quantum mechanics is that a pair of subatomic particles can sometimes become “entangled.” This means the fate of one instantly affects the other, no matter how far apart they are. It’s such a bizarre phenomenon that Einstein dissed the idea in the 1930s as “spooky action at a distance,” saying it showed that the developing model of the atomic world needed rethinking.

But it turns out that the universe is spooky after all. In 1997, scientists separated a pair of entangled photons by shooting them through fiber-optic cables to two villages 6 miles apart. Tipping one into a particular quantum state forced the other into the opposite state less than five-trillionths of a second later, or nearly 7 million times faster than light could travel between the two. Of course, according to relativity, nothing travels faster than the speed of light - not even information between particles.
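The "times faster than light" figure is easy to check from the distances and delay quoted above. A back-of-envelope sketch, which lands at roughly 6.4 million - the same ballpark as the article's "nearly 7 million":

```python
C = 299_792_458          # speed of light, m/s
MILE = 1609.344          # meters per mile

distance = 6 * MILE                  # separation between the two villages
light_time = distance / C            # time for light to cross, ~32 microseconds
measured_delay = 5e-12               # five-trillionths of a second

print(f"{light_time / measured_delay / 1e6:.1f} million times faster than light")
```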

Even the best theories to explain how entanglement gets around this problem seem preposterous. One, for example, speculates that signals are shot back through time. Ultimately, the answer is bound to be unnerving: According to a famous doctrine called Bell’s Inequality, for entanglement to square with relativity, either we have no free will or reality is an illusion. Some choice.

Why do placebos work?
Tor Wager makes his living inflicting pain. As a psychologist at Columbia University, he zaps people with brief electric surges in order to study the placebo effect, one of the most mysterious phenomena in modern medicine. In one recent experiment, Wager and a group of colleagues delivered harsh shocks to the wrists of 24 test subjects. Then the researchers rubbed an inert cream on the subjects’ wrists but told them it contained an analgesic. When the scientists delivered the next set of shocks, eight of the subjects reported experiencing significantly less pain.

The idea that an innocuous lotion could ease the agony of an electric shock seems remarkable. Yet placebos can be as powerful as the best modern medicine. Studies show that between 30 and 40 percent of patients report feeling better after taking dummy pills for conditions ranging from depression to high blood pressure to Parkinson’s. Even sham surgery can work marvels. In a recent study, doctors at Houston’s Veterans Affairs Medical Center performed arthroscopic knee surgery on one group of patients with arthritis, scraping and rinsing their knee joints. On another group, the doctors made small cuts in the patients’ knees to mimic the incisions of a real operation and then bandaged them up. The pain relief reported by the two groups was identical. “As far as I know, the placebo effect has never raised the dead,” says Howard Brody, a professor at the University of Texas Medical Branch and author of a book on the subject. “But the vast majority of medical conditions respond to placebo at least to some degree.”

How do placebos have such an effect? Nobody knows. Studies have shown that our brains can release chemicals that mimic the activity of morphine when we’re treated with placebo analgesics. But only lately have researchers begun to pin down the underlying physiological mechanisms. In his groundbreaking electrical-shock experiment, Wager used functional MRI to examine images of the brain activity of his subjects. When a person knew a painful stimulus was imminent, the brain lit up in the prefrontal cortex, the region used for high-level thinking. When the researchers applied the placebo cream, the prefrontal cortex lit up even brighter, suggesting the subject might be anticipating relief. Then, when the shock came, patients showed decreased activity in areas of the brain where many pain-sensitive neurons lead.

One day, this sort of research could point toward new treatments that harness the mind to help the body. Until then, doctors are divided on the ethics of knowingly prescribing placebos. Some think it’s shady to perform mock surgery or offer a patient pills that contain no active ingredients. Yet the best doctors have always employed one form of placebo: Studies show that empathy from an authoritative yet caring physician can be deeply therapeutic. Maybe handing out the occasional sugar pill isn’t such a bad idea.



What is the universe made of?
Astronomers scouring the heavens with powerful telescopes can see objects that are billions of trillions of miles away. These observations have proven essential to piecing together a fairly refined picture of the history and evolution of the cosmos. Nevertheless, a gaping hole remains in our understanding of a basic question: What is the universe made of? For more than 100 years we’ve known about atoms, and over the past century or so we’ve gone further and identified atomic constituents like electrons and quarks, as well as their exotic cousins - neutrinos, muons, and the like. But there is now convincing evidence that these ingredients are a cosmic afterthought. Current data shows that if you weighed everything in existence, these familiar particles would amount to about 5 percent of the total. Most of the universe is composed of other stuff, which, with all of science’s deep insights, we’ve yet to identify.

How do we know this? Well, over the course of many decades, astronomers studied the motion of galaxies and the stars within them, and found that the gravity exerted by this luminous matter was insufficient to account for the way these heavenly bodies moved. Only by positing large amounts of additional matter that doesn’t give off light (visible, x-ray, infrared, or any other kind) and is thus invisible to telescopes, could the data be explained. Through detailed cosmological measurements, scientists also discovered that this so-called dark matter couldn’t be made of the same electrons, protons, and neutrons that make up everything with which we are familiar.
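The missing-mass inference rests on Newton's formula for circular-orbit speed, v = √(GM/r): if luminous matter were all there is, orbital speeds should fall off beyond the visible disk, but observed rotation curves stay flat, implying the enclosed mass keeps growing with radius. A sketch with illustrative numbers (the toy galaxy mass and radii below are not from the text):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
KPC = 3.086e19   # one kiloparsec, in meters

def orbital_speed(mass_enclosed: float, radius: float) -> float:
    """Circular-orbit speed from Newtonian gravity: v = sqrt(G*M/r)."""
    return math.sqrt(G * mass_enclosed / radius)

# Toy galaxy: 1e41 kg of luminous matter, all inside 10 kpc
m_lum = 1e41
for r_kpc in (10, 20, 40):
    v = orbital_speed(m_lum, r_kpc * KPC)
    print(f"{r_kpc} kpc: {v / 1000:.0f} km/s")
# With fixed enclosed mass, speed drops by 1/sqrt(2) per doubling of radius;
# real galaxies stay roughly flat instead - hence the invisible extra mass.
```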

Then, in the late 1990s, two groups of astronomers, one led by Saul Perlmutter of the Lawrence Berkeley National Laboratory, the other by Brian Schmidt of the Australian National University, found something even stranger. Through observation of distant supernovas, these astronomers measured how the expansion rate of the universe has changed over time. Because of gravity’s relentless pull, most everyone expected that the expansion would be slowing. But the data from both groups showed the opposite. The expansion of the universe is speeding up. Something must be pushing outward, and luckily Einstein's general theory of relativity provides a ready-made candidate: A uniform, diffuse energy spread throughout space can act as an antigravity force. Since this energy gives off no light, it’s called dark energy.

Collectively, the observations establish that about 23 percent of the universe is dark matter and about 72 percent is dark energy. Everything else is squeezed into the remaining few percent.

Several experiments are now under way to identify dark matter. Scientists are searching for what they suspect is an exotic species of particle. Some studies are looking for clues by analyzing particles bombarding Earth from space; others, like the Large Hadron Collider, will analyze collisions between extremely fast-moving protons that have the potential to create dark matter in the lab. We are guardedly optimistic that we’ll be able to identify dark matter soon.

By contrast, the question of dark energy is wide open. What is its origin? What determined its quantity? Does the amount stay constant or vary? These are critical questions. Calculations show that if the amount of dark energy had been slightly larger, the universe would have blown apart so quickly that life as we know it could not exist.



What is the purpose of noncoding DNA?
A typical human cell contains more than 6 feet of tightly cornrowed DNA. But only about an inch of that carries the codes needed to make proteins, the day laborers of biology. What’s the other 71 inches for?
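The inch-versus-six-feet comparison pins down just how small the coding fraction is. A trivial check:

```python
total_inches = 6 * 12   # "more than 6 feet" of DNA per cell, in inches
coding_inches = 1       # the protein-coding stretch

fraction = coding_inches / total_inches
print(f"{fraction:.1%} of the DNA codes for protein")
```

That works out to about 1.4 percent - everything else is the "junk" in question.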

It’s junk, Nobelist Sydney Brenner said after it was discovered back in the 1970s. The name stuck, but biologists have known for a while that the junk DNA must contain treasures. If noncoding DNA were just along for the ride, it would rapidly incorporate mutations. But long stretches of noncoding DNA have remained basically the same for many millions of years - they must be doing something.

Now scientists are starting to speculate that proteins, and the regular DNA that creates them, are just the nuts and bolts of the system. “They’re like the parts for a 757 jet sitting on the floor of a factory,” says University of Queensland geneticist John Mattick. The noncoding DNA is likely “the assembly plans and control systems.” Unfortunately, he concludes, because we’ve spent 30 years thinking of it as junk, we’re just now learning how to read it.

Will forests slow global warming - or speed it up?
Everyone knows that forests are good for the environment. Trees grow by removing carbon dioxide - the principal cause of global warming - from the air. And the bigger and more plentiful the trees, the more CO2 they sequester. This makes forests a helpful bulwark against climate change. But despite the best carbon-eating action of our flora, the planet is heating up. This raises the specter of a future in which, paradoxically, forests don’t reduce climate change but - as they are destroyed - make it worse.

We don’t know which way it will go, because we know so little about forests themselves. Scientists estimate that up to 50 percent of all species live in forest canopies - three-dimensional labyrinths largely invisible from the ground - but virtually no one can tell you what lives in any given cubic meter of canopy, at any height, anywhere in the world. We don’t even have names for the most common species of trees in the Amazon.

But scientists can readily foresee the way in which these carbon killers instead become dangerous carbon spewers. As the climate warms, many forests will become drier, putting the trees under stress. Typically, this sets the stage for huge outbreaks of insects, which can strip trees of their leaves, killing large numbers of them. Once dead, trees release their carbon into the air - already roughly 25 percent of the greenhouse gases pouring into the atmosphere come from forests that are burned or cut down. Further, if they no longer exist, forests can’t absorb CO2 anymore, and the bare ground that is exposed heats up faster - forests are like giant swamp coolers for the planet. Will this happen?

Hard to say. If we don’t know which insects are eating the leaves now, we can’t gauge how global warming will affect them or how they in turn might affect forests. “You can’t possibly answer more general questions about forests until you at least know what lives there,” says Margaret Lowman, canopy scientist at New College of Florida. “It’s more than just giving names to things. We need to know what’s common and what’s rare, and what these species are doing, before we can go to the next level, which is to try to see the interaction between forests and Earth’s climate.”



What happens to information in a black hole?
Inside a black hole, gravity is so intense that neither matter nor energy can escape. But in 1975, Cambridge physicist Stephen Hawking said that something does escape: random particles now known as “Hawking radiation.” So if black holes eat organized matter - chock-full of information - and then spit out random noise, where does the information go?

Hawking said it gets locked up inside as the black hole eventually evaporates, destroying the information in the process. Which creates a paradox. Because the rules of physics say information, like matter and energy, can’t be destroyed.
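A sense of why that evaporation takes nearly forever comes from the standard Hawking temperature formula, T = ħc³/(8πGMk_B) - background physics, not anything stated in the text. A sketch:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.380649e-23       # Boltzmann constant, J/K
M_SUN = 1.989e30         # solar mass, kg

def hawking_temperature(mass_kg: float) -> float:
    """Temperature of a black hole's Hawking radiation."""
    return HBAR * C ** 3 / (8 * math.pi * G * mass_kg * K_B)

# A solar-mass black hole radiates at roughly 6e-8 kelvin - far colder
# than the cosmic microwave background, so it barely evaporates at all.
print(f"{hawking_temperature(M_SUN):.1e} K")
```

Lighter holes are hotter (temperature scales as 1/M), which is why evaporation only runs away at the very end.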

Hawking was confident. He convinced his super-genius counterpart at Caltech, physicist Kip Thorne, that he was right - but Thorne’s colleague John Preskill remained skeptical. So they made a bet: Hawking and Thorne said the singularity at the heart of a black hole destroyed information; Preskill said “nuh-uh.” Then, in 2004, Hawking reversed his position and decided that things that fall into a singularity aren’t lost; their information does leak out, though no one, except maybe Hawking himself, can explain why or how.

He presented Preskill with a baseball encyclopedia from which, presumably, information can be retrieved at will. Preskill accepted only grudgingly. “Even if you’re Stephen Hawking, it’s possible to be wrong twice,” he says.



What causes ice ages?
Scientists know that small-scale ice ages occur every 20,000 to 40,000 years and that massive ones happen every 100,000 years or so. They just don’t know why. The current working theory - first proposed in 1920 by Serbian engineer Milutin Milankovitch - is that irregularities in Earth’s orbit change how much solar energy it absorbs, resulting in sudden (well, geologically speaking) cooling. While this neatly fits the timing of short-term events, there’s still a big problem. Over the past few decades, studies have shown that orbital fluctuations affect solar energy by 1 percent or less - far too little to produce massive climate shifts on their own. “The mystery is, what is the amplification factor?” says University of Michigan geologist and climatologist Henry Pollack. “What takes a small amount of solar energy change and produces a large amount of glaciation?”

Studies of ice and seabed cores reveal that temperature rise and fall is heavily correlated with changes in greenhouse-gas concentrations. But it’s a chicken-and-egg problem. Are CO2 rises and falls a cause of climate change or an effect? If they are a cause, what initiates the change? Figuring this out could tell us a great deal about the current global warming problem and how it might be solved. But as Matthew Saltzman, a geologist at Ohio State puts it, “We need to know why greenhouse gases fluctuated in prehuman times, and we just don’t.”
- John Hockenberry, WIRED contributing editor



How does the brain calculate movement?
All of science, it seems, wants to know how brains give animals complex motor skills. Robotics, physics, neurophysiology, and medicine are just a few of the disciplines studying the topic. The paradox is that brains - even large human brains - are notoriously slow by processing standards: Set your hand on a hot plate and it takes hundreds of milliseconds to feel the burn. So how does the same gooey substance simultaneously acquire visual data, calculate positional information, and gauge trajectory to let a lizard’s tongue snatch a fly, a dog’s mouth catch a Frisbee, or a hand catch a falling glass? “With the thousands of muscles in the body, the motor cortex clearly isn’t ‘thinking’ in any sense about movement,” says UC San Diego neuroscientist Patricia Churchland. According to Stanford University’s Krishna Shenoy, the brain seems to create an internal model of the physical world, then, like some super-sophisticated neural joystick, traces intended movements onto this model. “But it’s all in a code that science has yet to crack,” he says. Whatever that code is, it’s not about size. “Even a cat’s brain can modify the most complicated motions while executing them.”



Why do the poles reverse?
Almost 800,000 years ago, compasses would have pointed south. A little further back, they would have pointed north. Evidence for such reversals comes from lava flows and cracks in the ocean floor, places where newly formed rock makes a record of the magnetic polarity.

We know that as Earth spins, the liquid metal in its molten core churns, generating an electromagnetic field. We also know that shifts in the movement of the core can alter the polarity of that field and that it takes about 7,000 years for the orientation to flip-flop once the process of reversal begins - something that happens on average two or three times every million years. But no one knows how it works. Some scientists believe the poles migrate slowly from one end to the other; some theorize that the magnetic field shuts down and then reemerges with opposite polarity.

As for what triggers the event, experts have suggested that a huge impact - say, a giant meteor - could create a disturbance in the core. But research by Gary Glatzmaier, a planetary science professor at UC Santa Cruz, shows that a violent catalyst isn’t needed. So why does pole reversal occur? “That’s like asking, why do hurricanes start?” he says. “Well, they’re always trying to, and sometimes the conditions are just right.”

How does the brain produce consciousness?
That slab of meat in your skull - a 3-pound walnut of wetware - somehow puts the you in you. Nobody really knows how. Philosophers since Plato have pondered the issue. And probing the relationship between mind and body was the central goal of psychology until behaviorists closed the door on mind in the early 20th century and focused on observable actions. But only recently have scientists tried to tackle consciousness, spurred by new tools like functional MRI and PET scans that can augment traditional clinical research by showing brain activity.

Already, however, these researchers find themselves haggling over familiar questions. Is consciousness merely wakefulness? No, we’re conscious when we dream. Is it our sense of personal identity? Yes, but surely it’s also the stream of words and images that runs through what William James called the “extended present,” the immediate workspace of our minds. It’s perception, but it’s also reflection - summoning up visual and verbal constructions, imaginary or real. It’s simulation, mentally walking ourselves through situations before we face them, learning and practicing, hoping to avert pratfalls.

No surprise, then, given this confusion, that scientific theories on consciousness are all over the map. Antonio Damasio, a neurologist and neuroscientist at the University of Southern California who studies brain-damaged patients, speculates that self-awareness evolved in humans as a regulatory mechanism, a way for the brain to understand what is going on with the body. He calls “the coming of the sense of self into the world of the mental” a “turning point in the long history of life.” Caltech’s Christof Koch, who studies vision as the starting point for mind, believes that people have specific “consciousness neurons.” And Bernard Baars of the Neurosciences Institute in San Diego suggests that consciousness is a controlling gateway to unconscious mechanisms such as working memory, word meanings, visual memory, and learning.

Some philosophers still argue that consciousness is too subjective to explain, or that it is the irreducible result of matter organized in a specific way. That philosophic black-boxing is probably more nostalgic than scientific, a clinging to the idea of a spirit or soul. Without that, after all, we’re just organisms - more complex, but no less predictable, than dung beetles. But scientists live to reduce the seemingly irreducible, and sentimentality is off-limits in the lab. Understanding consciousness means finding the biophysical mechanisms that generate it. Somewhere behind your eyes, that meat becomes the mind.



Why is fundamental physics so messy?
When the job description calls for reverse-engineering the universe, the pool of successful applicants will naturally include enough self-impressed overachievers to make second-degree ego burn a hazard of the trade. But even the leading researchers in theoretical particle physics, the most headstrong of the scientific elite, are humbled by their failure to figure out why the cosmos is such a mathematically elegant mess.

The equations themselves are lovely, describing how a baseball arcs parabolically between earth and sky or how an electron jumps around a nucleus or how a magnet pulls a pin. The ugliness is in the details. Why does the top quark weigh roughly 40 times as much as the bottom quark and, even worse, thousands of times more than the up quark and down quark combined? Maddeningly, the proton weighs almost, but not exactly, the same as its counterpart, the neutron. And wasn’t the electron enough? Did we really need its two fat cousins, the muon and the tau?
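The mass ratios the passage complains about can be checked against commonly published approximate quark masses (outside figures, not from the text - roughly 173 GeV for top, 4.18 GeV for bottom, and a few MeV each for up and down):

```python
# Approximate quark masses in MeV (commonly published values)
top, bottom = 173_000.0, 4_180.0
up, down = 2.2, 4.7

print(f"top/bottom: {top / bottom:.0f}x")            # ~41x
print(f"top/(up + down): {top / (up + down):,.0f}x") # ~25,000x
```

Nothing in the Standard Model's equations explains why these particular numbers, which is the "garbage in" the section laments.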

It’s as though some software engineer crafted a beautiful, bugless operating system - the laws of physics - and then fed it with random data, the output from a lava lamp, or moths bashing at a window screen. Garbage in, garbage out, generating the weird, starry heap of a universe we call home.

Optimists hope the randomness is actually pseudo-random - complexity in disguise, with The Algorithm at the core of everything, churning out the details, demanding that things be what they be.

The bet is that this codex lies tangled somewhere inside superstring theory. Deep within the quarks, face-to-face with the universal machine language, are tiny snippets of something - no one really knows what - called strings and branes. They wiggle around in their 10 or so dimensions and conjure up the universe, this universe, with a spec sheet about as symmetrical as a bingo card.

Superstring theory turns out to be more complex than the universe it is supposed to simplify. Research suggests there may be 10^500 universes... or 10^500 regions of this universe, each ruled by different laws. The truths that Newton, Einstein, and dozens of lesser lights have uncovered would be no more fundamental than the municipal code of Nairobi, Kenya, or Terre Haute, Indiana. Physicists would just be geographers of some accidental terrain.

Things might look brighter next year, when the Large Hadron Collider - the biggest scientific project ever - should be running full blast, using superconducting magnets to smash matter hard enough to break through the floor of reality. Physicists hope that down in the cellar they’ll find the Higgs boson - skulking in the dark like a centipede, furtively giving the other particles their variety of masses.

Or maybe they’ll just find more junk. If so, the search will probably be over for now, placed on hold for the next civilization with the temerity to believe that people, pawns in the ultimate chess game, are smart enough to figure out the rules.

How did human language evolve?
Lots of animals make noise; much of it even conveys information. But for sheer complexity, for developed syntax and grammar, and for the ability to articulate abstract concepts, you can’t beat human speech. MIT linguist Noam Chomsky and Harvard experimental psychologist Steven Pinker say it’s genetic. Pinker theorizes that language emerged about 200,000 years ago, when early humans who were efficient communicators were more likely to pass on their genes. (Less-than-efficient communicators were more likely to scream incoherently - instead of imparting an escape plan - before being devoured by a saber-toothed tiger.) A little more evidence: People with particular genetic defects have specific difficulties with speech and grammar.

Other scientists argue that spoken words are actually an outgrowth of other human skills, such as planning, memory, and logic. “There is no ‘language gene,’” says Luc Steels, a computer scientist at the Free University of Brussels in Belgium. “Language was a cultural breakthrough, like writing.” Steels built robots with a set of general intelligence traits but without a language module in their software, and they developed grammar and syntax systems similar to those of human language.

Blame neuroscientists for the controversy. The parts of the brain thought to be responsible for language are as well-understood as the rest of the brain, which is to say: not so much.

Why can’t we predict the weather?
A few years ago, weather forecasts were totally unreliable beyond a couple of days; today better computer models make them accurate as far as a week out. That’s fine for figuring out how to pack for a business trip or whether you need to rent a big tent for the wedding reception. The trouble starts when you want to build a computer model to predict the weather over decades or centuries. In 1961, a meteorologist named Edward Lorenz was running a computerized weather simulation and decided to round a few decimal places off one of the parameters. The tiny tweak completely changed weather patterns. This became known as the butterfly effect: A butterfly flapping its wings in Brazil sets off a tornado in Texas. Lorenz’s shortcut helped launch chaos theory and sparked an obsession among meteorologists with feeding as-perfect-as-possible data into their models in an attempt to lengthen their forecast window.
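Lorenz's accident is easy to reproduce. The sketch below (an illustration, not his original simulation) integrates his well-known three-equation convection model twice, with starting points that differ only in the fourth decimal place, and tracks how far apart the two runs drift:

```python
def step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz equations.
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0001, 1.0, 1.0)  # the same state, "rounded" in the fourth decimal
max_gap = 0.0
for _ in range(3000):
    a, b = step(a), step(b)
    max_gap = max(max_gap, abs(a[0] - b[0]))

# The 0.0001 discrepancy keeps growing until the two "forecasts"
# disagree about as much as two unrelated states of the system.
print(max_gap)
```

With chaotic dynamics like these, shrinking the initial error only postpones the divergence - which is why feeding models better data lengthens the forecast window but can never make it unlimited.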

But even refining precision doesn’t get us to long-term prediction. For that, climatologists need to understand boundary conditions, like the interactions between the atmosphere and the oceans. The goal, says Louis Uccellini, director of the National Centers for Environmental Prediction, is to model Earth as a single climate system. Then we can figure out what’ll happen to it next.

Why don’t we understand turbulence?
An airplane’s sudden loss of lift, liquid fuel igniting inside a rocket engine, blood clotting in an artificial heart valve - turbulence can be deadly. When a liquid or gas moves smoothly, it’s easy to go with the flow. But change certain conditions - speed, viscosity, surrounding space - and the orderly current dissolves into whirling chaos. If we could model the physics of turbulent flow in software, we could use the model’s output to design safer, more-energy-efficient machines.

The trouble is complexity. When a stream of water or air goes turbulent, groups of molecules form vortices of widely varying sizes that interact in seemingly random ways. To determine the outcome, we’d have to measure the initial conditions to an impractical degree of precision. And in any case, tracking a zillion particles is beyond the reach of any conceivable computer.

If we can’t predict how a given turbulent system will behave, at least we can simplify it enough to zero in on statistical likelihoods. The key is the transition zone: the precise spot where smooth flow breaks down. Here, chaos theory describes the proliferation of whorls, while the science of cellular automata, which imposes a grid over reality, reduces complex interactions to a limited number of simple equations. These mathematical tricks don’t bring turbulence to heel, but they do get engineers close enough to make reasonably sure your plane touches down on time... and in one piece.
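The grid-over-reality idea is simpler to demonstrate than to connect to real fluids. Below is a minimal sketch of an elementary one-dimensional cellular automaton (Wolfram's rule 30 - a textbook example of order breeding disorder, not an actual turbulence model): every cell obeys the same tiny rule, yet a single disturbance erupts into chaos.

```python
def ca_step(cells, rule=30):
    # Each cell's next state depends only on itself and its two
    # neighbors; the 8 possible neighborhoods index into the bits
    # of the rule number (30 is a famously chaotic rule).
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 31
row[15] = 1  # a single disturbance in an otherwise smooth "flow"
for _ in range(12):
    print("".join(".#"[c] for c in row))
    row = ca_step(row)
```

Run it and the triangle of churning cells spreading from one seed gives a rough feel for why a handful of simple local equations can still generate behavior nobody can predict at a glance.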

Is the universe actually made of information?
Humans have talked about atoms since the time of the ancients, and ever-smaller fundamental particles of matter followed. But no one even conceived of bits until the middle of the 20th century. The bit is a fundamental particle, too, but of different stuff altogether: information. It is not just tiny, it is abstract - a flip-flop, a yes-or-no. Now that scientists are finally starting to understand information, they wonder whether it’s more fundamental than matter itself. Perhaps the bit is the irreducible kernel of existence; if so, we have entered the information age in more ways than one.

The quantum pioneer John Archibald Wheeler, perhaps the last surviving collaborator of both Albert Einstein and Niels Bohr, poses this conundrum in oracular monosyllables: “It from bit.” For Wheeler, it is both an unanswered question and a working hypothesis, the idea that information gives rise, as he writes, to “every it - every particle, every field of force, even the spacetime continuum itself.” This is another way of fathoming the role of the observer, the quantum discovery that the outcome of an experiment is affected, or even determined, when it is observed. “What we call reality,” Wheeler writes coyly, “arises in the last analysis from the posing of yes-no questions.” He adds, “All things physical are information-theoretic in origin, and this is a participatory universe.”

Earlier generations would not have been able to imagine information as so... meaty. How could this abstract quality be substantial enough - enough of a thing - to be the substrate of all existence? Its newly powerful status began to emerge in 1948, when Claude Shannon at Bell Labs invented information theory. His scientific framework introduced the bit, defined concepts like signal and noise, and pointed the way to modems and compact discs, cell phones and cyberspace, Moore’s law, Metcalfe’s law, and a world of silicon valleys and alleys.
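Shannon's central quantity is easy to state in code. This sketch (an illustration of the standard entropy formula, not anything from Bell Labs' archives) measures how many bits per symbol a message actually carries:

```python
import math
from collections import Counter

def entropy_bits(message):
    # Shannon entropy H = -sum(p * log2(p)): the average number of
    # bits per symbol needed to encode the message optimally.
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A string with no surprise carries no information;
# a fair coin flip carries exactly one bit per symbol.
assert entropy_bits("aaaa") == 0      # fully predictable: 0 bits
assert entropy_bits("abab") == 1.0    # two equally likely symbols: 1 bit
assert entropy_bits("abcdabcd") == 2.0
```

Noise, in this framework, is whatever raises the number of yes-no questions a receiver must ask to pin down the message.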

Now the whole universe is seen as a computer - a cosmic processor of information. When photons and electrons and other particles interact, what are they really doing? Exchanging bits, transmitting quantum states. Every burning star, every silent nebula, every particle leaving its ghostly trace in a cloud chamber is an information processor. The universe computes its own destiny.

How much does it compute? How fast? How big is its total information capacity, its memory space? What is the link between energy and information - the energy cost of flipping a bit?

These are hard questions, but they are not mystical or metaphorical. Physicists and quantum-information theorists are using the bit to look anew at the mysteries of thermodynamic entropy and at those notorious information swallowers, black holes. They’re doing the math and producing tentative answers. To the small questions, that is.

For Wheeler, the big question of which comes first, the material universe or information, is a way of posing an even bigger question: “How come existence?” How does something arise from nothing? And that, it’s safe to say, is a question science cannot answer.


Why do some diseases turn into pandemics?
A pandemic - a transnational outbreak of disease - is really just a pathogen on a hot streak. After all, germs want what we all want, evolution-wise: to spread their genes. Success in the germ world means infecting a whole lot of people, reproducing, then infecting a whole lot more. The efficiency with which a microbe pulls that off depends on how the bug works and how the target - us - works. HIV, for example, loves a promiscuous-but-prudish population; human beings like to have sex but don’t like to talk about condoms. The Ebola virus, on the other hand, hasn’t found victims who exchange fluids with enough other people before dying (horribly). So changes in culture like jet airplane travel can make a population more vulnerable to a previously contained disease. And changes in a germ - say, if avian influenza H5N1 acquires the right genes from the human version - can be like spinach to Popeye. But no one knows how to predict when either of those things might happen. So don’t forget to wash your hands. A lot.
- Elizabeth Svoboda

Can mathematicians prove the Riemann hypothesis?
In the early 1900s, German mathematician David Hilbert said that if he awakened after 1,000 years of sleep, the first question he’d ask would be: Has the Riemann hypothesis been proven?

It’s been only 100 years, but the answer so far is no. Put forward by Bernhard Riemann in 1859, the hypothesis would establish the distribution of zeroes of something called the Riemann zeta function. That, in turn, correlates to the intervals between prime numbers.

Prime numbers (numbers that can be divided only by 1 and themselves: 2, 3, 5, et cetera) are the building blocks of mathematics, because all other numbers can be arrived at by multiplying them together (e.g., 150 = 2 x 3 x 5 x 5). Understanding the primes sheds light on the entire landscape of numbers, and the greatest mystery concerning primes is their distribution. Sometimes primes are neighbors (342,047 and 342,049). Other times a prime is followed by a desert of nonprimes before the next one pops up (396,733 and 396,833).
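Both the factorization claim and the gap-hunting are simple to mechanize. A small sketch (naive trial division - fine for illustration, far too slow for the number-theoretic frontier):

```python
def factorize(n):
    # Peel off prime factors smallest-first by trial division.
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def gap_after(p):
    # Distance from one prime to the next: the quantity whose
    # erratic behavior the Riemann hypothesis would help tame.
    q = p + 1
    while len(factorize(q)) != 1:  # a prime is its own sole factor
        q += 1
    return q - p

print(factorize(150))  # [2, 3, 5, 5]
print(gap_after(3))    # 2: the twin primes 3 and 5
print(gap_after(23))   # 6: a small desert before 29
```

Plot `gap_after` over the first few thousand primes and the jitter is immediately visible - that jitter is exactly what a proof of the hypothesis would constrain.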

Making sense of this bizarre arrangement would offer a base from which to solve numerous other long-standing math problems and could affect related fields, like quantum physics. Until they know whether it’s true, though, mathematicians can’t use Riemann. Princeton mathematician Peter Sarnak put it this way: “Right now, when we tackle problems without knowing the truth of the Riemann hypothesis, it’s as if we have a screwdriver. But when we have it, it’ll be more like a bulldozer.” Which is why the Riemann hypothesis has been named one of the Clay Millennium prize problems: Whoever proves (or disproves) it gets $1 million.

Why do we die when we do?
When asked why things die, physicists don’t hesitate: It’s the second law of thermodynamics. Everything, be it mineral, plant, or animal, a Lexus or a mitral valve or a protein in a cell wall, eventually breaks down. What that looks like in humans - what exactly it is that makes us age - is a question for biologists. It’s DNA damage by free radicals, maybe, or shrinkage of the caps on chromosomes. Telomeres, as they’re called, get smaller with each cell division. When they hit a certain length: apoptosis, or cell death.

But for the best explanation of the when of our mortality, you have to ask the ecologists. They have a rough way of calculating life span. Basically, the larger the species, the slower its energy-delivery systems (all that internal tubing, all that complicated traffic); the lower the metabolic rate, the longer the life. Animals can live fast or burn slow. “If you’ve ever picked up a little mouse, it’s effectively vibrating, its heart is beating so fast,” says Brian Enquist, an ecologist at the University of Arizona. “A blue whale’s heart is like a slow metronome or the ringing of a church bell, a very slow bong... bong... bong.”

Yet both get roughly the same number of beats - 100 million and change, spread over two years for the mouse and roughly 80 years for the whale. “There’s this beautiful invariant: All living creatures have about the same amount of energetic life,” Enquist says. Yet while many animals outmass us humans, few outlive us. Why the long life for us lightweights? Like the hide of a rhinoceros or the claws of a tiger, human cleverness makes us tough to kill. That means random longevity-enhancing genes have a pretty good shot at evading natural selection. A bird that gets eaten in its second month of life never passes on whatever fluke mutation might have given it - and its progeny - an extra year or two.

As for the ecologists’ neat mathematical equation, “primates are a little different,” Enquist concedes. “For the number of heartbeats we have in our lives, we live a little longer than we should, and it’s a big mystery why that is.” He speculates that the difference for us outliers will be explained by brain size - or, rather, by how much time and energy humans spend growing their brains relative to the rest of their bodies. Why lavishing that extra energy on brainmaking translates into disproportionately long lives, Enquist isn’t sure (and at 37, he has only about 36 more years to figure it out). Luckily, the same biological aberration that allows people to contemplate their own mortality is responsible, albeit indirectly, for delaying it.

What causes gravity?
Isaac Newton first figured out the fundamental nature of gravity in the late 1600s. By unraveling the mysteries of planetary movement and Earth’s pull on its inhabitants, he described modern physics. But more than three centuries later, that’s still all we have: an understanding of the effect, with almost no grasp of the cause. Is gravity carried by an elementary particle? Is it some fundamental feature of spacetime we don’t understand? Why can’t gravity be reconciled with the better-understood quantum forces? All these questions remain unanswered. Many scientists think gravity must be generated by a massless particle, and have even dubbed it the graviton. But experiments to detect this entity (using a super-collider, for example) can’t be performed with current technology. “To generate the energy required to investigate a gravity particle, we believe, would produce a black hole,” says Harvard physicist Lisa Randall. “Space itself just breaks down.” Right now, mathematics is the best investigative tool for getting gravity to square with subatomic forces like electromagnetism. But making the math work requires dealing with exotic string theory notions like invisible 10-dimensional space. “We’ve always understood that gravity was different,” Randall says. “If we figure out why in the next 30 years, there will be another big, new question. I guarantee it.”


Why can’t we regrow body parts?
Slice through your finger with a kitchen knife and it’s bye-bye pinkie. But lop a leg off a salamander and it’ll grow a new one with little more fuss than we expend on a broken nail. Scientists looking to reverse tissue damage caused by disease, injury, or aging want to know how the agile amphibians do it - and why we can’t.

When salamanders are wounded, skin, bone, muscle, and blood vessels at the site revert to their undifferentiated states, forming a spongy mass called a blastema. It’s as if the cells go back in time and then retrace their steps to assemble a new organ or limb.

We seem to have this same basic program written in our genes: As embryos, we grew arms, legs, heart, lungs, and so on with no problem, and even as adults, one type of cell in our nervous system can dedifferentiate to repair damage. Others in our liver show similar flexibility. But for the most part, our regenerative pathway appears to be roadblocked. The reason may be that the rapid cell division required to sprout a new limb looks to the body a lot like the unchecked growth of cancer. Our longevity makes us vulnerable to accumulated DNA mutations, so we’ve evolved molecular brakes to keep tumors at bay. In order to unlock our regenerative capabilities, scientists will have to figure out how to override the stop signals without sparking a malignant rampage.

Why do we still have big questions?
Information is expanding 10 times faster than any product on this planet - manufactured or natural. According to Hal Varian, an economist at UC Berkeley and a consultant to Google, worldwide information is increasing at 66 percent per year - approaching the rate of Moore’s law - while the most prolific manufactured stuff - paper, let’s say, or steel - averages only as much as 7 percent annually. By this rough metric, knowledge is growing exponentially. Indeed, the current pace of discovery is accelerating so rapidly that it seems as if we’re headed for that rapture of enlightenment known as the Singularity.
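Those growth rates compound in very different ways. A quick check (the 66 and 7 percent figures are the article's; the doubling-time formula is standard):

```python
import math

def doubling_time_years(annual_rate):
    # Years for a quantity growing at a fixed annual rate to
    # double: solve (1 + r) ** t == 2 for t.
    return math.log(2) / math.log(1 + annual_rate)

print(round(doubling_time_years(0.66), 1))  # information: ~1.4 years
print(round(doubling_time_years(0.07), 1))  # paper or steel: ~10.2 years
```

At those rates, the stock of information doubles roughly seven times before the stock of steel doubles once.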

In fact, we may be nearly there. A decade ago, author John Horgan interviewed prestigious scientists in many fields and concluded in his book The End of Science that all the big questions had been answered. The world of science has been roughly mapped out - structure of atoms, nature of light, theories of relativity and evolution, and so on - and all that remains now is to color in the details.

So why do we still have so many unanswered questions? Take the current state of physics: We don’t know what 96 percent of the universe is made of. We call it “dark matter” and “dark energy” - euphemisms for our ignorance.

Yet it is also clear that we know far more about the universe than we did a century ago, and we have put this understanding to practical use - in consumer goods like GPS receivers and iPods, in medical devices like MRI scanners, and in engineered materials like photovoltaic cells and carbon nanotubes. Our steady and beneficial progress in knowledge comes from steady and beneficial progress in tools and technology. Telescopes, microscopes, fluoroscopes, and oscilloscopes allow us to see in new ways and to know more about the universe.

The paradox of science is that every answer breeds at least two new questions. More answers mean even more questions, expanding not only what we know but also what we don’t know. Every new tool for looking farther or deeper or smaller allows us to spy into our ignorance. Future technologies such as artificial intelligence, controlled fusion, and quantum computing (to name a few on the near horizon) will change the world - that means the biggest questions have yet to be asked.





