« March 2008 | Main | May 2008 »

April 30, 2008

Engineering Challenges

In a previous article (Preservation - The 21st Century Engineering Challenge, February 15, 2007), I wrote about the Engineering Challenges web site established by the National Academy of Engineering (NAE). The purpose of this web site was to solicit ideas as to what should be the "Grand Challenges for Engineering" for the twenty-first century. The NAE has about 2,000 members, all elected by their peers, and all superstar engineers. The list of submitted topics was reviewed by an expert panel [1] chaired by former Secretary of Defense William Perry. This panel announced its final list earlier this year at the annual meeting of the American Association for the Advancement of Science (Boston, February 15, 2008). The final challenges fall into four theme areas: health, sustainability, enhanced security, and quality of life.

• Engineering better medicines
• Advancing health informatics
• Providing access to clean water
• Providing energy from fusion
• Making solar energy economical
• Restoring and improving urban infrastructure
• Enhancing virtual reality
• Reverse engineering the brain
• Exploring natural frontiers
• Advancing personalized learning
• Developing carbon sequestration methods
• Managing the nitrogen cycle
• Securing cyberspace
• Preventing nuclear terror

All of these are worthy goals, but I have some criticisms of this list. Although solar and fusion seem to be viable "carbon-zero" energy sources, it's a mistake to single out these particular technologies. Instead, the goal should be "economical, renewable energy" in itself. Preventing nuclear terror is likewise too specific. Bio-terror may be more dangerous, so the challenge should be "terror prevention." Of course, this leads to the uncomfortable debate about funneling money into social programs and psychological research to stem terrorism at its root cause. The Engineering Challenges web site asks its readers to vote on their favorite challenges, but making a selection from this list is like answering the loaded question, "Do you still beat your spouse?" The answer you would give may not be among the choices.

Robert Socolow, a professor of mechanical and aerospace engineering at Princeton University and a member of the panel, said in an interview [4] that one motivation for the list was "to make sure that young people know that engineering is an exciting profession, one that makes a difference to society." The Engineering Challenges project is sponsored by a grant from the U.S. National Science Foundation, Award # ENG-063206.

1. Committee members are William Perry, Alec Broers, Farouk El-Baz, Wesley Harris, Bernadine Healy, W. Daniel Hillis, Calestous Juma, Dean Kamen, Raymond Kurzweil, Robert Langer, Jaime Lerner, Bindu Lohani, Jane Lubchenco, Mario Molina, Larry Page, Robert Socolow, J. Craig Venter, and Jackie Ying. A full list of the panel members, and their biographies, can be found at http://www.engineeringchallenges.org/cms/7124.aspx.
2. Jonathan Wood, "Editorial: Making the future," Materials Today, vol. 11, no. 4 (April, 2008), p. 1.
3. Jay Vegso, "NAE Grand Challenges in Engineering" (Computing Research Association Web Site, February 25, 2008).
4. Teresa Riordan, "Panel identifies greatest technological research challenges of the 21st century" (Princeton University Press Release, February 15, 2008).
5. National Academy of Engineering Grand Challenges in Engineering Web Site.

April 29, 2008

Superheavy Element Unbibium

In a previous article (Where's All the Technetium?, August 29, 2006), I wrote about the nuclear shell model and how it predicts the stability of some isotopes of elements. Although it appears that elements become less stable and more radioactive as we move down the Periodic Table to higher atomic numbers, this is not always the case. The longest-lived isotope of technetium (technetium-98) has a half-life of 4.2 million years, so any technetium present at the formation of the Earth is now gone. However, technetium stands between molybdenum and ruthenium in the Periodic Table, and both of these elements have unconditionally stable isotopes.

The nuclear shell model predicts that certain magic numbers of neutrons and protons will lead to stable isotopes of some high atomic number (superheavy) elements. The shell model also predicts islands of stability around these magic numbers in which longer-lived isotopes will be found. The usual technique for producing a superheavy element is to bang large nuclei together and hope they stick, and this is how researchers have tried to produce element 122, unbibium (a temporary name representing the number 122, "un" = "one" and "bi" = "two"). If a recent research report proves accurate, there's an isotope of element 122 that's so stable that it exists in nature [1-2].

A group at the Hebrew University of Jerusalem decided to sift through a quantity of thorium with an inductively-coupled-plasma mass spectrometer. Thorium exists mainly as two isotopes, thorium-230 and thorium-232, and these isotopes dominated the mass spectrum. There were also other masses detected that correlate with the expected oxides and hydrides of thorium. What also appeared in the mass spectrum was an element of atomic number 122 and mass number 292, with an abundance of about one part per trillion (10^-12). This may seem like a small quantity, but thorium is about as abundant as lead, so this is an unexpectedly high natural abundance for this element. The half-life of this element is estimated to be in excess of 100 million years [2]. Of course, great care was taken to exclude possible sources of error in the experiments, but it will take a few independent confirmations before this result is widely believed.
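
As a quick sanity check on that abundance figure (my own back-of-the-envelope arithmetic, not a calculation from the paper), one part per trillion still corresponds to billions of atoms per gram of thorium, which is why an ICP mass spectrometer could plausibly see it:

```python
# Back-of-the-envelope check: how many atoms of element 122 would a
# 10^-12 abundance imply per gram of thorium? (Constants are standard values.)

AVOGADRO = 6.022e23      # atoms per mole
MOLAR_MASS_TH = 232.0    # g/mol, thorium-232
ABUNDANCE = 1.0e-12      # one part per trillion, as reported

th_atoms_per_gram = AVOGADRO / MOLAR_MASS_TH          # ~2.6e21 thorium atoms
atoms_122_per_gram = th_atoms_per_gram * ABUNDANCE    # ~2.6e9 element-122 atoms

print(f"Th atoms per gram:   {th_atoms_per_gram:.2e}")
print(f"E122 atoms per gram: {atoms_122_per_gram:.2e}")
```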

1. "First superheavy element found in nature" (ArXiv Physics Blog, April 28, 2008).
2. A. Marinov, I. Rodushkin, D. Kolb, A. Pape, Y. Kashiv, R. Brandt, R.V. Gentry, and H.W. Miller, "Evidence for a long-lived superheavy nucleus with atomic mass number A=292 and atomic number Z=~122 in natural Th" (ArXiv Preprint, April 24, 2008).

April 28, 2008

A Sinister Chirality

Sinister is the Latin word for "left." It lingers in the English language as a slight to left-handed people, and it exists in medical terminology as the official word for "left." Thus, physicians write OD (oculus dexter) and OS (oculus sinister) for right eye and left eye, respectively. On a human scale, this is definitely a non-sinister world, since fewer than ten percent of adults are left-handed, a trait more common in men than women. However, educated "lefties" shouldn't feel so bad. A study by researchers at Lafayette College and Johns Hopkins University has shown that left-handed male college graduates are significantly richer than their dexterous classmates [1]. Strangely, this correlation does not apply to women. But all lefties, whether male or female, should be heartened by the fact that the biological world is left-handed on the molecular level [2].

The quality of "handedness" in chemistry is called "chirality," from the Greek word for hand, χειρ. Mathematically, an object exhibits chirality if it is not identical to its mirror image; that is, a chiral object can't be made to coincide with its mirror image by rotation and translation alone. The original chiral object and its mirror image are called enantiomorphs. Typically, the drugs we take are mixtures of enantiomorphs, but only one handedness has the desired pharmacological action. If enantiomorphs were easy to separate, drug companies would be compelled to do so, but they are not. What we ingest is a racemic mixture, containing equal quantities of the enantiomorphs, since a chemical reaction will in general yield a racemic mixture. This led to a mystery as to why, except for a few bacteria, biology favors left-handed amino acid molecules as the essential building blocks of life over their right-handed enantiomers.
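
The mathematical definition can be illustrated with a toy example of my own (not from any reference cited here): in two dimensions, the signed area of a polygon is unchanged by rotation and translation but flips sign under reflection, so a scalene triangle can never be superimposed on its mirror image by rotation and translation alone.

```python
# A minimal two-dimensional illustration of chirality. The signed area of a
# polygon is invariant under rotation and translation, but reflection flips
# its sign -- so a figure and its mirror image with opposite signed areas
# are genuine enantiomorphs in the plane.

def signed_area(points):
    """Shoelace formula: positive for counterclockwise vertex order."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return area / 2.0

triangle = [(0.0, 0.0), (3.0, 0.0), (1.0, 2.0)]   # a scalene triangle
mirror = [(-x, y) for x, y in triangle]           # its reflection in the y-axis

print(signed_area(triangle))  # 3.0
print(signed_area(mirror))    # -3.0, the opposite handedness
```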

In the early 1980s, I attended a crystallography workshop at which one of the speakers was Sir Charles Frank. I didn't know it at the time, but Sir Charles was the author of an often-cited paper on a possible mechanism for favored chirality [2]. He reasoned that an initial random and tiny excess of one chirality, in a system that copies this chirality but also feeds from a reservoir of both chiralities, would lead to chiral exclusivity.

At the 235th national meeting of the American Chemical Society (New Orleans, April 6-10), Ronald Breslow of Columbia University suggested that this preference for left-handed molecules came from outer space [3]. No, he didn't propose that life on Earth arose from the garbage dump of an extraterrestrial picnic (that's been proposed before!). Breslow thinks that the Earth was seeded with organic molecules transported via meteorites. These meteorites would initially have carried a racemic mixture of molecules, but exposure to polarized radiation from neutron stars would have selectively destroyed a few percent of one enantiomer over the other. This mechanism is supported by analyses of organic material contained in recent meteorite falls to Earth. Because of this meteoric material, the pre-biotic Earth would have favored formation of left-handed amino acids over right-handed ones.

1. Joel Waldfogel, "Sinister and Rich - The evidence that lefties earn more" (Slate, August 16, 2006).
2. F.C. Frank, Biochim. Biophys. Acta, vol. 11 (1953), pp. 459-463.
3. Charmayne Marsh and Michael Bernstein, "Meteorites delivered the 'seeds' of Earth's left-hand life" (ACS Press Release, April 6, 2008).

April 25, 2008

Consumer Nanotechnology

According to the ad hoc nanotechnology watchdog group, The Project on Emerging Nanotechnologies (PEN), three to four new consumer products based on nanotechnology enter the marketplace each week [1-2]. PEN keeps an inventory of consumer nanotechnology items, and this inventory now exceeds six hundred items [3]. Not all of these are occult advancements to your cellphone or music player circuitry. Some are applied directly to your body or ingested. There are nine nano toothpastes on the market, one of which, Swissdent Nanowhitening Toothpaste, contains nanoparticles of calcium peroxide, the "Power Enzymes" bromelain (extracted from pineapple) and papain (extracted from papaya), as well as silicate polishing agents.

Silver, the most common consumer nanoparticle additive, is used in twenty percent of consumer nano products. Silver has a known germicidal effect, and silver nitrate eye drops were used historically to protect newborns from infection. Not surprisingly, the common cosmetic additives zinc oxide, titanium dioxide and silica are used in nano products. Another common use of nanoparticles is as a colorant, and these are especially useful in automotive products exposed to solar ultraviolet radiation, since they are resistant to fading.

About $50 billion in manufactured goods contained nanotechnology in 2006, and estimates are that nanotechnology will be incorporated into $2.6 trillion of manufactured goods by 2014 [1-2]. This would be fifteen percent of all articles sold. Annual nanotechnology R&D investment is estimated to be more than $12 billion. One thing that worries nanotechnologists is a possible public backlash, perhaps unfounded, that would play out similarly to what's happened in the genetically modified organism (GMO) marketplace. Will Nano-Free stickers be placed alongside GMO-Free on some products?

As recited on its web site, The Project on Emerging Nanotechnologies "collaborates with researchers, government, industry, NGOs, policymakers, and others to look long term, to identify gaps in knowledge and regulatory processes, and to develop strategies for closing them. The Project will provide independent, objective knowledge and analysis that can inform critical decisions affecting the development and commercialization of nanotechnologies." [4] One area of concern that the Project has identified is the indifference of the US to potential risks from nanotechnology. European nations invest twice as much as the US in nanotechnology risk assessment.

1. Alex Parlini, "New nanotech products hitting the market at the rate of 3-4 per week" (Project on Emerging Nanotechnologies Press Release, April 24, 2008).
2. "New nanotech products hitting the market at the rate of 3-4 per week" (PhysicsOrg, April 24, 2008).
3. Online inventory of manufacturer-identified nanotech goods (Project on Emerging Nanotechnologies).
4. Mission Statement for Project on Emerging Nanotechnologies.

April 24, 2008

Where is Everybody?

In 1950, the Italian-American physicist Enrico Fermi asked, "Where is everybody?" during lunch with colleagues. Fermi, who had been awarded the 1938 Nobel Prize in Physics for his work on induced radioactivity, was wondering why we had never been visited by extraterrestrials. Our knowledge of the universe in 1950, although imperfect by today's standards, still allowed for the existence of many extrasolar planets, some of which should have produced intelligent life. Scientists since the time of Copernicus have realized that the Earth does not occupy a favored position, and Fermi's statement, now known as the Fermi Paradox, expressed surprise that what should have happened has not happened. Advances in our understanding of the universe have made Fermi's paradox more of a paradox today than when it was first stated.

If the Principle of Uniformity, which states that the laws of nature found on Earth are the same throughout the universe, is true, then the same processes that led to life on Earth should operate throughout the universe. The universe is estimated to contain between 10^11 and 10^12 galaxies, each with upwards of 10^12 stars, for a grand total of almost 10^24 stars. Furthermore, the age of the universe is estimated to be about 13.7 billion years, allowing sufficient time for the development of many advanced civilizations, quite a few of which should exist in our own galaxy. Our own Milky Way galaxy, which is about five times larger than an average galaxy, contains about 400 billion stars. However, we've had no contact from extraterrestrials, whether by radio or by spacecraft. Someone has termed this the Great Silence [1].

Previously, the idea of extrasolar planets was supposition based on the Principle of Uniformity. In recent years, more than two hundred extrasolar planets have been observed. Charles Lineweaver of the Mount Stromlo Observatory estimates that the median age of planets in our galaxy is 6.4 billion years, so a majority of planets are older than the Earth. Why the Great Silence?

One possible reason, which I call the Star Trek Postulate, is that we are quarantined until we grow sufficiently to join the galactic community. Another is the idea that there is no desire among extraterrestrials to explore or communicate with others. It may be that extraterrestrials are so advanced that Earthlings are not worth their interest. Another idea is that communications technology has advanced far beyond radio, and we can't tap into the streams of data that pervade our solar system. The worst possible scenario is that intelligent life is flawed to such an extent that civilizations destroy themselves as quickly as possible, either by environmental problems, accident, or war. It may be that we are truly alone in the universe, an idea that makes me more uncomfortable than the prospect of seeing a little green man in my backyard. Andrew Watson of the University of East Anglia (Norwich, Norfolk, England) has published a recent analysis [2-3] in which he concludes that the probability of intelligent life in the universe is very low. Watson argues that, because of the way stars evolve, any planet has only a finite amount of time for life to evolve, and life on Earth evolved only late in this window.

1. George P. Dvorsky, "The Fermi Paradox: Back with a vengeance" (August 04, 2007).
2. Ned Potter, "Anyone Out There? Maybe Not" (ABC News Blog, April 18, 2008).
3. Andrew J. Watson, "Implications of an Anthropic Model of Evolution for Emergence of Complex Life and Intelligence," Astrobiology, vol. 8, no. 1 (January, 2008), pp. 175-185.

April 23, 2008

A Measure of Intelligence

We've eliminated many prejudices in our enlightened age, but there's one that resists extinction. We all believe that "intelligent" people are better than other people. In an effort to quantify this prejudice, we have IQ tests. Despite the objections of the Educational Testing Service that it's not an intelligence test, we have the SAT, the bane of secondary school students and society's grand predictor of success. For the really masochistic, there's also the Graduate Record Examination (GRE), the MCAT, the GMAT, and myriad other tests.

Is there any way to measure intelligence without a pencil and paper (nowadays, keyboard and monitor) test? Many years ago, I read a study that related intelligence to a simple reaction time test. This was in an era before video games, but the analogy to a video game is quite good. A person was presented with two bars on a screen, one on top of the other. One bar was shorter than the other, and the test subject was asked to press one of two buttons to indicate whether the short bar was on the top or the bottom. The bars were flashed for just a few tens of milliseconds to hundreds of milliseconds. The more intelligent test subjects (as measured by an IQ test) were better able to discern the difference between the bars at shorter viewing times. A more recent study [1] in Sweden found that people with a higher IQ score are better at keeping time. Specifically, test subjects were asked to tap out a simple, regular rhythm. The ones with the least variation in the tapped rhythm scored highest on an intelligence test. The researchers also found a correlation between intelligence, timekeeping ability, and a high volume of white matter in the brain's frontal lobes. The frontal lobes are the brain regions thought to be involved in problem solving, planning, and managing time (the Microsoft Project area of the brain?).
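
The timekeeping measure can be sketched simply (with made-up tap data; the study's actual protocol and statistics were surely more involved): quantify rhythmic steadiness as the relative variation of the intervals between successive taps, with smaller values meaning a steadier rhythm.

```python
# Sketch of a tapping-variability measure. Tap times are in milliseconds;
# the data below are invented for illustration, not from the Swedish study.

from statistics import mean, stdev

def tapping_variability(tap_times_ms):
    """Coefficient of variation of the intervals between successive taps."""
    intervals = [b - a for a, b in zip(tap_times_ms, tap_times_ms[1:])]
    return stdev(intervals) / mean(intervals)

steady = [0, 500, 1001, 1499, 2000, 2502]    # near-perfect 500 ms rhythm
erratic = [0, 430, 1050, 1460, 2110, 2490]   # same tempo, sloppier timing

print(f"steady:  {tapping_variability(steady):.3f}")
print(f"erratic: {tapping_variability(erratic):.3f}")
```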

Most of us use the cognition-enhancing drug caffeine every day, since we drink coffee, and we don't think twice about it. As a quick scan through any issue of Science will demonstrate, there are more scientists involved in the life sciences than the physical sciences, and many of these are aware of more potent cognition-enhancing drugs. There is, of course, a desire for us to do our best work, and since our work is mostly "brain work," there's always a desire to gain an extra edge over our colleagues. Knowing this, the editors of the journal Nature conducted an informal survey of its readers' use of cognition-enhancing drugs [2-4]. Specifically, Nature asked about the usage of three generally available drugs; namely, methylphenidate (Ritalin), which is prescribed for attention-deficit disorder; modafinil (Provigil), which is prescribed for narcolepsy and jet-lag; and beta-blockers, which have an anti-anxiety effect. These drugs seem also to have a cognition-enhancing effect on normal people. Depending on age, between 10% and 20% of respondents admitted taking drugs solely for the purpose of cognition enhancement, and not for a medical reason. See the references for more information.

1. Katarina Sternudd, "Intelligence and rhythmic accuracy go hand in hand" (Karolinska Institutet Press Release, April 16, 2008).
2. Editorial: "Enhancing, not cheating," Nature, vol. 450 (November 15, 2007), p. 320.
3. Brendan Maher, "Poll results: look who's doping," Nature, vol. 452 (April 9, 2008), pp. 674-675.
4. Brendan Maher, "Results from our survey on neuroenhancement" (Nature, April 4, 2008).

April 22, 2008

Higher Temperature Superconductors

We don't have a room temperature superconductor just yet, but we might be on the verge of something new. In a previous article (Magic Numbers of Atoms, April 17, 2008), I wrote about the potential high temperature superconductivity of aluminum nanoclusters containing 45 or 47 atoms. The unusual properties of these nanoclusters arise from a quantum mechanical trick that manipulates the possible states of the electrons in a material through geometrical constraint. The traditional class of high temperature superconducting materials employs a similar trick, since these are layered oxides in which atoms are arranged in the crystal in identifiable sheets. One example is the exotic material HgBa2Ca2Cu3O8+x, which has a superconducting transition temperature greater than 130 K. These materials have been around since 1986, but the transition temperature seems to have topped out at the 130 K value.

The layered idea still has a lot to commend it, since scientists from Japan and China have been publishing steadily increasing superconducting transition temperatures for phosphorus-based and arsenic-based layered oxides [1-3]. It's interesting that these superconductors also contain iron, since magnetism and superconductivity generally don't coexist. Of course, iron atoms need not be magnetic, but the presence of iron in a superconductor is unusual just the same. Another thing that interests me is that some of this work has been published in chemical journals [2-3]. It seems as if the field of superconductivity has been transformed from a physicist's playground into the realm of materials science.

It all started about two years ago with the iron-based, layered oxy-pnictide LaOFeP, which has alternating layers of lanthanum oxide (La3+O2-) and iron pnictide (Fe2+P3-), and a superconducting transition temperature of about 4 K [2]. In February, the composition La[O1-xFx]FeAs (x = 0.05-0.12) was published with a transition temperature as high as 26 K [3]. Just a few weeks ago, scientists from the University of Science and Technology of China (Hefei) reported 43 K for samarium oxygen fluorine iron arsenide (SmO1-xFxFeAs). The most recent research, published in preprint form [4] by scientists from the National Laboratory for Superconductivity, Institute of Physics, and the Beijing National Laboratory for Condensed Matter Physics, Chinese Academy of Sciences (Beijing), was on an iron oxide material that doesn't contain fluorine. This is the layered compound ReFeAsO1-δ, where Re is the rare earth element Sm, Nd, Pr, Ce, or La. They found that the transition temperature increases as the size of the rare earth atom decreases, so that SmFeAsO1-δ has a transition temperature of 55 K.

1. Adrian Cho, "Second Family of High-Temperature Superconductors Discovered" (ScienceNOW Daily News, April 17, 2008).
2. Yoichi Kamihara, Hidenori Hiramatsu, Masahiro Hirano, Ryuto Kawamura, Hiroshi Yanagi, Toshio Kamiya, and Hideo Hosono, "Iron-Based Layered Superconductor: LaOFeP," J. Am. Chem. Soc., vol. 128, no. 31 (July 15, 2006), pp. 10012-10013.
3. Yoichi Kamihara, Takumi Watanabe, Masahiro Hirano, and Hideo Hosono, "Iron-Based Layered Superconductor La[O1-xFx]FeAs (x = 0.05-0.12) with Tc = 26 K," J. Am. Chem. Soc., vol. 130, no. 11 (February 23, 2008), pp. 3296-3297.
4. Zhi-An Ren, Guang-Can Che, Xiao-Li Dong, Jie Yang, Wei Lu, Wei Yi, Xiao-Li Shen, Zheng-Cai Li, Li-Ling Sun, Fang Zhou, Zhong-Xian Zhao, "Novel Superconductivity and Phase Diagram in the Iron-based Arsenic-oxides ReFeAsO1-delta (Re = rare earth metal) without F-Doping" (ArXiv Preprint, April 16, 2008).

April 21, 2008

The Chaos Man

Computers and chaos are intimately connected (but not for the reasons you think!). According to Greek mythology, as catalogued in Hesiod's Theogony (c. 700 BC), Chaos (Χαος) was the original, undifferentiated state of the universe. The Roman poet, Ovid (43 BC-17 AD), perhaps influenced by the state of the world at the time, described Chaos as a completely disordered state, the definition that endures today.

The connection to computing came in 1961. That's when the mathematician and meteorologist Edward Lorenz was doing simulations of weather on a computer. He had interrupted a program, and he proceeded to restart it by inputting the exit values for his parameters as initial conditions for the subsequent run. He discovered that the results were very different from those of running the program from start to finish. The problem was that the printed values he had used had slightly less precision than the values stored in the program; that is, just a small change in values (about 0.1%) gave vastly different results. Lorenz realized that this was not just a problem of computation; it had physical significance. In a 1972 presentation, Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?, Lorenz coined the term "butterfly effect" for physical processes that depend strongly on initial conditions. Edward Lorenz (b. May 23, 1917) died on April 17, 2008, at the age of 90.

There was an inkling of chaos in pure mathematics at the end of the nineteenth century. Henri Poincaré showed that the Three Body Problem, the orbits resulting from the gravitational attraction of three heavenly bodies, could not be solved, although the equations of motion were simple. Poincaré's findings were known by most astronomers and astrophysicists, but they were unknown in other fields. Lorenz's work highlighted the chaos principle in a more down-to-earth context, and the concurrent rise of ubiquitous computing set chaos into exponential growth. Lorenz continued to work with chaos, and he is responsible for the eponymous Lorenz attractor.

Lorenz was awarded the Crafoord Prize in 1983 by the Royal Swedish Academy of Sciences. The Crafoord Prize recognizes achievement in scientific fields not included in the Nobel Prizes. In 1991, Lorenz was awarded the Kyoto Prize for basic sciences in the field of earth and planetary sciences for his seminal work in computer modeling of the weather. The prize committee wrote that Lorenz's chaos principle "profoundly influenced a wide range of basic sciences and brought about one of the most dramatic changes in mankind's view of nature since Sir Isaac Newton." According to an MIT press release [2], some scientists think that the twentieth century will be remembered for three scientific revolutions; namely, relativity, quantum mechanics, and chaos theory.

Like most early computer scientists, Lorenz had a background in mathematics, receiving an AB in mathematics from Dartmouth College in 1938 and an AM in mathematics from Harvard University in 1940. He served as a weather forecaster in World War II, and then obtained an Sc.D. in meteorology from MIT in 1948. He was invited to remain at MIT, where he served as a professor from 1948 to 1987, and as department head for part of this period. In 1967, Lorenz authored the textbook, "The Nature and Theory of the General Circulation of the Atmosphere." He was awarded emeritus professor status in 1987. Lorenz was known for being a quiet man. His colleagues said that getting him to talk was painfully difficult, and he rarely co-authored papers.

My favorite example of chaos is the logistic map, an iterated number series of the form

x_(n+1) = r x_n (1 - x_n)

where the initial value x_0 is a number between zero and one, and r is a positive number. This equation is very sensitive to the value of r: for r between about 3.57 and 4, the series is chaotic. See Ref. 4 for more details.
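
Both regimes are easy to see in a few lines (a minimal sketch of my own):

```python
# The logistic map in both of its moods: for r = 2.5 the iteration settles to
# the fixed point 1 - 1/r = 0.6 regardless of x0; for r = 3.9 it is chaotic,
# and a 0.1% change in x0 eventually gives a completely different trajectory.

def logistic(r, x0, n):
    """Iterate x -> r*x*(1-x) n times, starting from x0."""
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

# Stable regime: different starting points converge to the same value
print(logistic(2.5, 0.2, 100))   # ~0.6
print(logistic(2.5, 0.7, 100))   # ~0.6

# Chaotic regime: nearby starting points diverge (the butterfly effect)
print(logistic(3.9, 0.2000, 100))
print(logistic(3.9, 0.2002, 100))
```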

1. Kenneth Chang, "Edward N. Lorenz, a Meteorologist and a Father of Chaos Theory, Dies at 90" (New York Times, April 17, 2008).
2. "Edward Lorenz, father of chaos theory and butterfly effect, dies at 90" (MIT Press Release, April 16, 2008).
3. Seth Borenstein, "MIT prof Edward Lorenz, father of chaos theory, dies at 90" (Wired News, April, 16, 2008).
4. logistic map (Wikipedia).

April 18, 2008

Physics Companies

Quite a few years ago, in the Total Quality Management era, I was a member of a team that prepared a brochure on our research capabilities. The team needed to get this brochure into the hands of the company's leading technologists. The problem was making a list of the "company's leading technologists." My solution was to search the previous five years of company patents, deciding that the named inventors were the company's leading technologists. The American Institute of Physics used a similar tactic in determining the twenty "Top Physics Companies" for a full page ad in a recent issue of Physics Today [1]. The companies selected were the ones with the most readers of Physics Today, the general interest physics magazine sent free to members of the societies affiliated with the American Institute of Physics and, rarely, to paid individual subscribers. There are about 120,000 subscribers to Physics Today [2]. The August, 2007, subscriber list indicates the following top physics companies:

The Aerospace Corporation
Northrop Grumman
Lockheed Martin
General Electric
Alcatel/Lucent/Bell Labs
Chevron Texaco
General Atomics
Varian Medical Systems
Ball Aerospace and Technology
SRI International
JDS Uniphase
Boeing Company
Intel Corporation

Defense contractors figure prominently on this list. Honeywell is not on this list, so I'm a fish out of water (perhaps it's phys out of water). Supposedly, I'm a materials scientist, which may account for my corporate longevity. There's a lot of physics in many Honeywell products, but the prime example would be our ring laser gyroscope [3], which functions via the Sagnac effect.

1. Physics Today, vol. 61, no. 3 (March 2008), p. 76.
2. Physics Today FAQs.
3. Honeywell Inertial Products Web Site

April 17, 2008

Magic Numbers of Atoms

Aluminum is an excellent electrical conductor, having a resistivity (26.5 nano-ohm-meters) somewhat higher than that of copper (16.8 nano-ohm-meters), but it's a poor superconductor. The superconducting transition temperature for aluminum, the temperature below which aluminum is a superconductor (in zero magnetic field), is 1.20 K. This is quite a bit below liquid helium temperature (4.2 K), much lower than that of an alloy of niobium and tin (18 K), and very much lower than those of the so-called high temperature superconductors (>130 K for the exotic material HgBa2Ca2Cu3O8+x). I parenthetically added "in zero magnetic field" in quoting the transition temperatures, since too high a magnetic field will upset the superconducting state of a material.

Aluminum was just a footnote in the history of superconductivity until 1999, when physicists from Louisiana State University (Baton Rouge) decided to try something different with thin superconducting films of aluminum. The easy way to apply a magnetic field to a thin film is perpendicular to the surface, but they decided to apply a field in-plane "to see what would happen [1]." They found that a field of 5.9 Tesla (59 kGauss) would destroy the superconductivity at a temperature of 30 mK, but they needed to reduce the field to 5.6 Tesla (56 kGauss) for it to reappear. The aluminum superconductivity was hysteretic, something that was never seen before. Of course, when something is unexpected, scientists do quite a few experiments to see what's wrong. In this case, everything was right, and the effect was confirmed [2]. One strange consequence of this hysteresis is that at certain magnetic fields aluminum is only superconducting in a range of temperatures; that is, it will become superconducting when heated.

After all this, aluminum still had some surprises left for low temperature physicists. Physicists at Indiana University (Bloomington) have published a preprint of their work on the superconductivity of aluminum nanoclusters containing 45 or 47 atoms [3]. They see evidence for a superconducting transition temperature of 200 K, higher than that of the high temperature superconductors. They are cautious, because their experiments demonstrate just one of three tests required for superconductors. These three tests are zero electrical resistance; evidence of a superconducting phase transition; and the particular behavior of superconductors in a magnetic field, which is called the Meissner effect. The Indiana University team has demonstrated a large change in the heat capacity of the aluminum nanoclusters, so their experiments show a possible superconducting phase transition.

The possibility of a high superconducting transition temperature for metal nanoclusters with a "magic number" of atoms was predicted two years ago in a theoretical paper by Yuri Ovchinnikov at the Landau Institute for Theoretical Physics (Moscow) and Vladimir Kresin at Lawrence Berkeley National Laboratory [4-5]. This shell structure of electrons is not unlike the nuclear shell model, which predicts special stability for magic numbers of nucleons. For those of you who play the lottery, the nuclear magic numbers are 8, 20, 28, 50, 82, 126, and 184. Since the New Jersey Lottery Cash 5 uses numbers from 1 to 40, the proper numbers to play would be selected from (1 + N mod 40) = {9, 21, 29, 11, 3, 7, 25} = {3, 7, 9, 11, 21, 25, 29}. Is this a winning strategy? Here are the last four times 3 and 7 were part of the winning set:

• 01/17/2008 - 3 7 19 27 30
• 10/25/2007 - 3 7 8 11 23
• 09/14/2007 - 3 7 16 28 29
• 06/04/2007 - 3 7 28 34 38
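Purely for fun, the magic-number-to-lottery mapping above can be checked in a couple of lines of Python:

```python
# Map the nuclear magic numbers listed above onto the 1-40 range
# of the New Jersey Lottery Cash 5 game, via (1 + N mod 40).
magic = [8, 20, 28, 50, 82, 126, 184]
picks = sorted(1 + n % 40 for n in magic)
print(picks)  # [3, 7, 9, 11, 21, 25, 29]
```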

1. Warming Up to Superconductivity (Physical Review Focus, November 1, 1999).
2. V. Yu. Butko and P. W. Adams, "State Memory and Reentrance in a Paramagnetically Limited Superconductor," Phys. Rev. Lett., vol. 83 (1999), pp. 3725-3728.
3. Nanoclusters break superconductivity record (ArXiv Blog, April 11, 2008).
4. Vladimir Z. Kresin, Yurii N. Ovchinnikov, "Shell Structure and Strengthening of Superconducting Pair Correlation in Nanoclusters" (ArXiv Preprint, March 28, 2008).
5. Vladimir Z. Kresin and Yurii N. Ovchinnikov, "Shell structure and strengthening of superconducting pair correlation in nanoclusters," Phys. Rev. B, vol. 74 (2006), p. 024514.

April 16, 2008

John Archibald Wheeler

I met the eminent theoretical physicist, John Archibald Wheeler, in March, 2000, at a symposium at the Graduate Center of the City University of New York on the history of quantum mechanics [1]. Wheeler, who was a very active octogenarian at the time, kindly autographed my copy of his book, "Geons, Black Holes, And Quantum Foam: A Life in Physics." Wheeler, who was born on July 9, 1911, in Jacksonville, Florida, died on April 13, 2008, at the age of 96 [2-4]. Wheeler was known as the popularizer of the term, black hole, for a gravitationally collapsed star, but he was modest enough to point out that the term was suggested to him by an unnamed physicist at a meeting. Among physicists, he was an icon in the fields of relativity and quantum mechanics.

Wheeler's career was unique among theoretical physicists of his generation, since he entered the field without a passage through New York City schools. He graduated from Baltimore City College (despite its name, a secondary school, not a "college") in 1926, and obtained his Ph.D., at the young age of 21, from Johns Hopkins University in 1933, where he worked under Karl Herzfeld. As a freshly-minted theoretical physicist, Wheeler made the almost obligatory pilgrimage to Copenhagen to study for a year with Niels Bohr. He was a professor of physics at Princeton University from 1938 to 1976, and his most famous student, Richard Feynman, received his Ph.D. under Wheeler's tutelage in 1942. During World War II, Wheeler worked on the Manhattan Project. Princeton's mandatory retirement policy forced Wheeler to retire in 1976, so he left for the University of Texas (Austin), where he was Director of the Center for Theoretical Physics from 1976 to 1986. He returned to Princeton as an emeritus professor for the last years of his life. Wheeler never received the Nobel Prize, an oversight perhaps relating to the fact that, although he contributed to research in many areas, he had no single principal discovery.

When Niels Bohr came to Princeton in 1939, it was expected that he would spend most of his time with Einstein. Instead, Wheeler and Bohr developed the theory of nuclear fission. Wheeler, who collaborated with Einstein in Einstein's later years, transformed Princeton University into a leading center for studies in general relativity. In 1973, Wheeler, along with Kip Thorne and Charles Misner, wrote the twelve-hundred-page textbook, "Gravitation," which is still in print. He is credited with putting the "physics" back into relativity, which had become a branch of mathematics. There's a similar problem today, since string theorists are still trying to decide whether they're mathematicians or physicists.

I'll leave you with a quotation from Wheeler [4]. "If you haven't found something strange during the day, it hasn't been much of a day."

1. The Play's the Thing! (This Blog, November 5, 2007).
2. Dennis Overbye, "John A. Wheeler, Physicist Who Coined the Term 'Black Hole,' Is Dead at 96" (The New York Times, April 14, 2008).
3. "Black hole" scientist dies at 96 (BBC News, April 15, 2008).
4. Martin Weil, "John A. Wheeler, 96; Helped Build Atom Bomb, Studied Black Holes" (Washington Post, April 15, 2008, Page B07).

April 15, 2008

Racetrack Memory

Early in my career, I was involved in the development of magnetic bubble memory materials. I was working at the Materials Research Laboratory of Allied Chemical Corporation, which became Allied Corporation, then Allied-Signal, and was eventually merged with Honeywell. As I recalled in a previous article (Thirtieth Anniversary, October 31, 2007), magnetic bubble memory materials are now unknown beyond the few remaining specialists who worked in this area. One of these is Stuart Parkin, who still investigates magnetic memory for IBM. Parkin has made headlines recently for a new type of magnetic device called a racetrack memory [1-2]. Parkin's team published its research on racetrack memory in a recent issue of Science [3].

The racetrack memory concept was patented by Parkin in 2004 [4], but it's taken this long to build a demonstration device. A racetrack memory is inherently simple, but the nanoscale of the device makes it difficult to produce. However, it's this same nanoscale that makes the device attractive as a high density memory. Data is stored as magnetic domains of specific polarity, just as it is in other magnetic memories. In the racetrack memory, the domains reside in nanoscale permalloy wires. Data domains are written at one end of the wire by a magnetic field, transported along the wire by a current pulse, and read at the other end by a detector. Parkin's racetrack demonstrator has only three bits, but it illustrates that data domains can be moved as a group without upsetting the memory contents.
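The write-shift-read cycle amounts to a shift register, and can be visualized with a toy model. This sketch treats the wire as a queue of domains; the class and its structure are my own invention, illustrating only the logical behavior, not Parkin's device physics:

```python
from collections import deque

class RacetrackToy:
    """Toy model of a racetrack memory: a nanowire holding magnetic
    domains (1/0 polarity) that all shift together under a current
    pulse. Purely illustrative; real devices move domain walls."""
    def __init__(self, length):
        self.wire = deque([None] * length, maxlen=length)

    def write(self, bit):
        self.wire.appendleft(bit)   # a new domain enters at the write end

    def pulse(self):
        self.wire.appendleft(None)  # a current pulse shifts every domain along

    def read(self):
        return self.wire[-1]        # the detector sits at the far end

wire = RacetrackToy(4)
for bit in (1, 0, 1):  # write three bits, as in Parkin's demonstrator
    wire.write(bit)
wire.pulse()
print(wire.read())  # the first bit written (1) arrives at the detector
```

Note that the bits move as a group on each pulse, which is the essential trick the demonstrator had to prove.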

The racetrack memory concept has many advantages. First, it has no moving parts, so it's much more reliable than hard drives and other rotating memory devices. Unlike other solid state non-volatile memory devices, such as flash drives, racetrack memory would have unlimited read/write cycles (flash memory sticks endure fewer than 100,000 read/write cycles). Racetrack memory is also fast, about 100,000 times faster than flash memory [5]. One question that must be addressed is power consumption. Although racetrack memory is non-volatile, so power need not be applied to keep the memory contents, a current is required to shuttle the bits. Racetrack memory is clearly in its early days, so it will be interesting to see the progress made in the next several years.

1. Paul Marks, "IBM creates working racetrack memory device" (New Scientist News Service, April 10, 2008).
2. Agam Shah, "IBM Lays Claim to Cheaper, Faster Memory" (IDG News Service, April 10, 2008).
3. Masamitsu Hayashi, Luc Thomas, Rai Moriya, Charles Rettner and Stuart S. P. Parkin, "Current-Controlled Magnetic Domain-Wall Nanowire Shift Register," Science, vol. 320, no. 5873 (April 11, 2008), pp. 209-211.
4. Stuart S. P. Parkin, "Shiftable magnetic shift register and method of using the same," US Patent No. 6834005 (Dec 21, 2004).
5. Masamitsu Hayashi, Luc Thomas, Charles Rettner, Rai Moriya, Yaroslaw B. Bazaliy, and Stuart S. P. Parkin, "Current Driven Domain Wall Velocities Exceeding the Spin Angular Momentum Transfer Rate in Permalloy Nanowires," Phys. Rev. Lett., vol. 98 (January 19, 2007), p. 037204.

April 14, 2008

Hot Information

There's a saying in internet circles that "information wants to be free." In the real world, however, storing and serving information is a very energy-intensive task, and energy is far from free. Aside from the cost of the electricity to power computer servers and their storage units, there's the additional expense of removing waste heat. Your desktop computer has a 200 watt power supply, and although much of that power is efficiently used in computation, the net result of all that computation is still heat. Your desktop computer is rarely utilized to the extent that all 200 watts are required, but it's a different story for a network server, which may require 200 watts at all times. Multiply this by the hundreds, perhaps thousands, of servers in a data center, and you have a very high electric bill for running the servers, plus an additionally high electric bill for cooling the server room. Computer manufacturers are mindful of these problems, and they have been steadily decreasing the power consumption of their server hardware. Today's blade servers are highly efficient, but increased network traffic has erased any advantages.

In an example of careful planning, Google has established a data center in Oregon, along the Columbia River, an area of cheap and plentiful electrical power [1]. Microsoft is building a data center in Northlake, Illinois, where electrical power is about 5 cents per kilowatt-hour, much less than the 9 cents per kilowatt-hour it pays in California, and much less than we folk in Northern New Jersey pay (11 cents per kilowatt-hour) [2]. Microsoft is building another data center in Quincy, Washington, that uses 2 cent per kilowatt-hour hydroelectric power. The cool climate in Oregon and Illinois may also be a factor in such decisions.
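To get a feel for the numbers, here's a back-of-the-envelope calculation of the annual electricity cost of a single server drawing a constant 200 watts at the rates quoted above (cooling overhead excluded; real data centers roughly double this figure to remove the waste heat):

```python
def annual_cost(watts, dollars_per_kwh, servers=1):
    """Annual electricity cost of servers drawing constant power."""
    kwh_per_year = watts / 1000.0 * 24 * 365  # 8760 hours in a year
    return kwh_per_year * dollars_per_kwh * servers

# One 200-watt server at the electricity rates quoted above:
for label, rate in [("Quincy hydro", 0.02), ("Northlake", 0.05),
                    ("California", 0.09), ("New Jersey", 0.11)]:
    print(f"{label}: ${annual_cost(200, rate):,.2f} per year")
```

At 1752 kilowatt-hours per server per year, the spread between a 2-cent and an 11-cent rate is about $158 per server annually, which explains the site-selection arithmetic for a center with thousands of servers.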

If we were to take the cool climate idea to a logical extreme, shouldn't we have data centers in Antarctica? Well, maybe the logistics are too hard for Antarctica, but what about Iceland? Apparently, Iceland is pitching itself as the data center capital of the world [3]. The logic is inescapable. Not only does Iceland have a cold climate, it also has a lot of vacant land and cheap geothermal energy. Is moving to Iceland worth the bother? A large data center costs about $200 million to construct, and it needs to be serviced by a huge internet data "pipe." In 2006, the last year for which data are available, data center power costs in the US were $4.5 billion. I think Iceland is a good idea, but I would stay away from the more active volcanoes.

1. John Markoff and Saul Hansell, "Hiding in Plain Sight, Google Seeks More Power" (New York Times, June 14, 2006).
2. Rich Miller, "Microsoft Plans $500M Illinois Data Center" (Data Center Knowledge, November 05, 2007).
3. Steve Hamm, "It's Too Darn Hot - The huge cost of powering and cooling data centers has the tech industry scrambling for energy efficiency," Business Week (March 20, 2008).

April 11, 2008

Rubik's Cube

Everyone is familiar with the supposed "toy," Rubik's Cube, which is actually an adventure in mathematics for the unsuspecting. The popularity of Rubik's Cube might arise from the fact that it was patented only in Hungary by its inventor, Ernö Rubik, so it was copied everywhere. Rubik is an architect and sculptor, and his principal invention, dating from 1974, is the array of interlocking plastic shapes that allows rotations of the cube structure. Similar games were sold with magnetic interlocks, but the molded plastic construction of Rubik's cube allowed inexpensive mass-production. It is estimated that the number of Rubik's cubes manufactured amounts to a few percent of the world's population, a large number indeed.

The traditional cube, with a 3x3 array of squares on each face, has (8! x 3^7) x (12! x 2^11)/2 = 4.3 x 10^19 possible starting positions [1], so you would think it would be impossible to solve. Obviously, a brute-force computer approach won't work. There are, however, some simple procedures for solving a cube [2], and anecdotal evidence that about twenty moves always suffice. The real chore, for serious mathematicians, is finding an optimal solution for Rubik's Cube and deciding what the actual minimum really is. This minimum number of moves guaranteed to solve any position of Rubik's cube is not known exactly, but the proven upper bound has decreased from 42 in 1995, to 40 in 2005 [3], and recently to 25 [4-5].
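The position count is easy to verify with Python's arbitrary-precision integers:

```python
from math import factorial

# Count the reachable configurations of a standard 3x3x3 cube:
# 8 corner pieces (8! arrangements, 3^7 independent orientations) and
# 12 edge pieces (12! arrangements, 2^11 independent orientations),
# halved because legal turns reach only even overall permutations.
corners = factorial(8) * 3**7
edges = factorial(12) * 2**11
positions = corners * edges // 2
print(positions)           # 43,252,003,274,489,856,000
print(f"{positions:.1e}")  # about 4.3e19
```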

The mathematician, Tomas Rokicki, has proved that Rubik's cube can be solved in at most 25 moves [4-5]. Much as in the proof of the Four Color Theorem, Rokicki used a computer-assisted approach. He first used mathematics to reduce the possible moves on the cube into two billion sets, each containing twenty billion elements. Using significant computer time (1500 hours on a fast workstation), he found that most of these sets are equivalent. In this effort, he used one of the most popular tools of mathematicians and physicists; namely, symmetry. Further number-crunching showed that there are no cube configurations that require 26 moves, thus lowering the limit to 25 moves. As a side effect of this research, Rokicki presents an algorithm that can find solutions in twenty moves at a very high rate.

1. Rubik's Cube (Wikipedia).
2. How to solve the Rubik's Cube (Wikipedia).
3. Silviu Radu, "A New Upper Bound on Rubik's Cube Group" (ArXiv, December 23, 2005).
4. Rubik's cube proof cut to 25 moves (ArXiv Blog, March 26, 2008).
5. Tomas Rokicki, "Twenty-Five Moves Suffice for Rubik's Cube" (ArXiv, March 24, 2008).
6. Tom Rokicki's Personal Web Site.

April 10, 2008

Aviation Biofuel

Green aviation faces many hurdles. Although the carbon dioxide emitted by fossil fuels can be counterbalanced by carbon sequestration, a grow-your-own fuel source, a biofuel with sufficient energy density, would solve the problem more directly. The growing plants remove as much carbon from the atmosphere as the burning fuel releases. I reviewed biofuels in several previous articles [1-3]. This year, Richard Branson's Virgin Atlantic Airways flew a Boeing 747 test flight using a fuel blend with twenty percent biofuel in one of its four engines. The flight was a joint effort by Virgin Atlantic, Imperium Renewables (manufacturer of the blended biofuel), Boeing and General Electric. Rolls-Royce is waiting in the wings, teaming with Air New Zealand on a biofuel test flight later this year; and Boeing is teaming with Continental Airlines for another test flight next year.

Aircraft are responsible for about three percent of greenhouse gas emissions. With carbon taxes becoming de rigueur in many countries, carbon emission will start eating into the profits of an already beleaguered aviation industry. Public environmental activism may be a further factor. At least one article [4] warns that the aviation industry may take on a stigma not unlike that of today's tobacco industry if it doesn't address the fossil fuel problem. As in other industries, conservation helps to some extent. The Boeing 787 Dreamliner, constructed from lightweight composites, consumes twenty percent less fuel per passenger-mile than a conventional airliner. It's estimated that the aviation industry pays $200 million annually for each cent of increase in fuel cost [5].

The Virgin Atlantic test used traditional biofuel feedstocks of coconut oil and babassu (Attalea speciosa, a wild palm) oil. Although these are far removed from the rapidly-growing renewable feedstocks under development, these fuels do not contribute to deforestation. The Air New Zealand test later this year will use a biofuel closer to the mark. This biofuel will be made from algae, grown in salt water, and jatropha, which grows on marginal land not suitable for other agriculture. The source of the biofuel is important. Boeing has calculated that fueling all the world's airliners on soybean biofuel would require cultivation of a landmass the size of Europe [4]. Algae would require an area about the size of Belgium. Bill Glover, head of Boeing's Commercial Environmental Strategy, says that a biofuel-powered aircraft is just five years away, and a major push towards biofuel fleets will be underway in just ten years.

1. Cellulosic Ethanol, February 20, 2008.
2. Green, Green, Flying Machine, August 16, 2007.
3. Green, Green, Flying Machine (Part II), August 17, 2007.
4. Patrick Mazza, "Taking Aloft With Sustainable Biojet" (WorldChanging Team, April 7, 2008).
5. Matthew L. Wald, "A Cleaner, Leaner Jet Age Has Arrived" (New York Times, April 9, 2008).

April 09, 2008

Morning Coffee

As you can see from the times of my blog postings, I'm usually the first one to arrive at the laboratory in the morning, so it's my responsibility to make the first pot of coffee. The mathematician, Alfréd Rényi, said that a mathematician is "a machine for turning coffee into theorems." Coffee is so important to mathematics that this quotation is usually attributed to his more famous colleague, Paul Erdös, who drank copious quantities, and whom I eulogized in a previous article (The Erdös Number, August 17, 2006). The stimulative effect of coffee is primarily from its caffeine content (C8H10N4O2, 1,3,7-trimethyl-1H-purine-2,6(3H,7H)-dione), which varies according to coffee type and preparation, as follows (per ounce, approximate) [1]:

• Espresso: 30-40 mg
• Drip coffee: 16-25 mg
• Instant: 10-15 mg
• Decaf, brewed: 0-0.1 mg
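Per-ounce figures are a little unintuitive, so here's the arithmetic for typical servings; the serving sizes are my assumptions, not from the cited source:

```python
# Caffeine per typical serving, from the per-ounce figures above.
# Serving sizes (ounces) are assumed, not from the cited reference.
servings = {
    "espresso (1.5 oz shot)": (1.5, 30, 40),  # (oz, mg/oz low, mg/oz high)
    "drip coffee (8 oz cup)": (8, 16, 25),
    "instant (8 oz cup)": (8, 10, 15),
}
for drink, (oz, lo, hi) in servings.items():
    print(f"{drink}: {oz * lo:.0f}-{oz * hi:.0f} mg caffeine")
```

By the cup, drip coffee out-caffeinates an espresso shot, despite espresso's stronger per-ounce concentration.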

Of course, people drink coffee not just for its value as a stimulant, but also for the taste. Not surprisingly, analytical chemists have been studying coffee for some time. More than a thousand volatile organic compounds have been detected in coffee, fifty of which are considered to be important to coffee's aroma and taste. A recent article in the journal, Analytical Chemistry, compared proton-transfer reaction mass spectrometry (PTR-MS) data for the headspace above brewed espresso coffee with the assessments of a human taste panel [2]. The chemists were able to develop a predictive model of espresso taste based on sixteen characteristic features of the PTR-MS data. A free online version of the paper is available [3].

1. Coffee (Wikipedia).
2. Christian Lindinger, David Labbe, Philippe Pollien, Andreas Rytz, Marcel A. Juillerat, Chahan Yeretzian, and Imre Blank, "When Machine Tastes Coffee: Instrumental Approach To Predict the Sensory Profile of Espresso Coffee," Anal. Chem. (January 26, 2008).
3. PDF File of Ref. 2.

April 08, 2008

Fine Structure Constant

Colleagues who are familiar with my work know that I enjoy simple experiments. Some of my best experiments have involved short lengths of PVC tubing from The Home Depot. In this simplicity, I follow in the tradition of the physicist, Ernest Rutherford, who was awarded the 1908 Nobel Prize in Chemistry for his "investigations into the disintegration of the elements, and the chemistry of radioactive substances." Rutherford was famous for constructing his experiments with "string and sealing wax." Richard Reeves writes in a recent biography of Rutherford [1] that when one of his students needed a metal tube for an experiment, Rutherford got it by cutting a piece from the handlebar of an old bicycle. A very simple experiment on the measurement of the fine structure constant is soon to be published in Science. This experiment shows, in effect, that you can measure the fine structure constant with your naked eye, provided that it's well calibrated.

I mentioned the fine structure constant (commonly called α) in two previous articles [2-3]. The fine structure constant is interesting to physicists, since it is a dimensionless number approximately equal to 1/137. If the fine structure constant were even slightly different from its value, life would not exist. To underscore its importance to the physical nature of the universe, the fine structure constant can be expressed in terms of several other important physical constants:

α = e^2 / (2 εo h c)

where e is the elementary charge, h is Planck's constant, c is the speed of light, and εo is the permittivity of free space.

Graphene, atomically thin sheets of graphitic carbon now receiving much attention, was the topic of two previous articles [4-5]. Physicists from the University of Manchester (England) and the University of Minho (Portugal) have found a way to measure the fine structure constant using the optical transmittance of graphene. A single graphene layer absorbs quite a lot of visible light for its thickness, 2.3%, and theoretical calculations show that this absorbance is directly related to the fine structure constant. The theory relates the absorbance to the ideal case of electrons confined to two dimensions, something closely approximated by the graphene sheets. The absorbance is nπα, where n is the number of graphene layers. The accuracy of this measurement is a few percent, so this method is not suitable for obtaining accurate values of α, which is known to twelve digits. A preprint of the paper is available on the internet [6].
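As a sanity check, both α and the predicted graphene opacity can be computed from the defining formula, using current CODATA values for the constants (more precise than what was available in 2008, but the digits shown here agree):

```python
import math

# CODATA values for the constants in the formula above
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # permittivity of free space, F/m
h = 6.62607015e-34       # Planck's constant, J*s
c = 299792458.0          # speed of light, m/s

alpha = e**2 / (2 * eps0 * h * c)
print(f"alpha   = {alpha:.9f}")   # 0.007297353
print(f"1/alpha = {1 / alpha:.3f}")  # 137.036
# One graphene layer should absorb pi * alpha of the incident light:
print(f"graphene opacity = {math.pi * alpha:.1%}")  # 2.3%
```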

Rutherford was known also for his ability to explain his science in understandable terms. Said Rutherford, "If you can't explain your physics to a barmaid, it is probably not very good physics." [7]

1. Richard Reeves, "A Force of Nature: The Frontier Genius of Ernest Rutherford" (W. W. Norton, December 3, 2007, ISBN-13:978-0393057508).
2. Mathematics, Physics, and Reality (January 10, 2007).
3. Von Klitzing Constant (April 11, 2007).
4. Graphene Electronics (March 9, 2007).
5. Atomically Thin Drum Heads (February 27, 2007).
6. R.R. Nair, P. Blake, A.N. Grigorenko, K.S. Novoselov, T.J. Booth, T. Stauber, N.M.R. Peres and A.K. Geim, "Universal Dynamic Conductivity and Quantized Visible Opacity of Suspended Graphene".
7. Quotations of Ernest Rutherford (Wikiquote).

April 07, 2008

Traffic Modeling

When I hear the word, "traffic," I first think of the rock group, Traffic, which launched the career of Steve Winwood. Then, of course, my mind turns to the less pleasant automobile traffic we all face on our daily commutes to and from work. Scientists are members of the working class, and a long commute gives them time to think. One thing they think about is the nature of traffic, and quite a few papers have been written on traffic and related topics. The first paper I read on the topic of commuting was the 1983 paper, "A Fair Carpool Scheduling Algorithm," by Ronald Fagin and John H. Williams of IBM [1]. This paper is a classic, and it's been referenced often.

A more recent paper in the New Journal of Physics by scientists from a plethora of Japanese universities presents a model of traffic flow in which vehicles are considered to be an ensemble of identical particles [2-3]. The model reveals that traffic jams will occur when the average vehicle density exceeds a certain critical value; that is, a phase transition occurs. I've noticed the complementary phenomenon on school holidays, when low vehicle density allows an easy commute. In their model, a circular roadway was populated with vehicles moving at the same speed. Fluctuations in vehicle density were added, and it was found that even tiny fluctuations grew to upset the homogeneous traffic flow. The model indicates that traffic jams are not the result of bottlenecks, such as closure of a traffic lane for construction or accidents; rather, the bottleneck sparks an increase in vehicle density that leads to a jamming phase transition. Surprisingly, this was not just modeled on a computer. Data were taken for a cohort of 22 vehicles traveling about 30 km/hr on a circular roadway 230 meters in circumference. Fluctuations were produced by a particular driver changing his speed. This work suggests that limiting access to roadways by metering entry from crossroads will enhance commuting speed.
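The jamming instability can be sketched with a toy optimal-velocity car-following model, in the spirit of the models used in this line of research. The parameters here are dimensionless and illustrative (my choices, not the Japanese team's), picked so that uniform flow is unstable:

```python
import math

def simulate(n_cars=22, track=44.0, steps=10_000, dt=0.1):
    """Toy optimal-velocity car-following model on a circular track.
    Each car relaxes toward a preferred speed set by its headway; with
    these parameters the uniform flow is unstable, so a tiny initial
    perturbation grows into a stop-and-go wave, i.e., a jam."""
    a = 1.0  # driver sensitivity (inverse reaction time)

    def v_opt(gap):
        # preferred speed: zero at zero headway, saturating at large gaps
        return math.tanh(gap - 2.0) + math.tanh(2.0)

    spacing = track / n_cars
    x = [i * spacing for i in range(n_cars)]
    v = [v_opt(spacing)] * n_cars
    x[0] += 0.5  # tiny perturbation of the homogeneous flow
    for _ in range(steps):
        gaps = [(x[(i + 1) % n_cars] - x[i]) % track for i in range(n_cars)]
        v = [max(0.0, vi + a * (v_opt(g) - vi) * dt) for vi, g in zip(v, gaps)]
        x = [(xi + vi * dt) % track for xi, vi in zip(x, v)]
    return v

speeds = simulate()
print(f"speed range: {min(speeds):.2f} to {max(speeds):.2f}")
```

After the run, the spread between the slowest and fastest cars shows that the initially uniform flow has broken up into jammed and free-flowing regions, with no bottleneck anywhere on the track.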

Another paper, published in the not-that-familiar journal, "Planning for Higher Education," looked specifically at traffic congestion around Kent State University [4]. University traffic is interesting, since it occurs at times different from peak-hour commuting traffic, but a large number of vehicles is involved. Also, many large universities are in towns with few residents, so university traffic is the principal traffic component. Kent State, for example, has 25,000 students, roughly the same number as the population of its host city, Kent, Ohio. In a rather obvious conclusion, the authors state that improvements in traffic flow would require either an increase in capacity or a reduction in demand. They suggest that reduced demand can be accomplished by parking management, class scheduling, and encouraging non-automobile modes of transportation, such as the bicycle. Perhaps my elitist scientist personality is showing, but this paper, written by a professor of geography and a manager of transportation services, doesn't give as many insights into traffic flow as the scientific paper from Japan.

When we were engaged in the Design for Six Sigma experience, I became an expert in the Monte Carlo method, which is a very powerful analysis tool. One thing I did using Monte Carlo was a simple model of my morning commute, which involves four highways with different speed limits and travel lengths, seven traffic signals, and one heavy merge. Applying an appropriate distribution type, mean, and standard deviation to each event, and running ten thousand trials, showed that my mean commute time is 38.7 minutes with a standard deviation of 4.6 minutes. I can get to work in a little more than half an hour at best, or about three-quarters of an hour at worst, a finding that's been experimentally confirmed.
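A minimal sketch of such a commute model is below. The distribution choices and parameters are invented for illustration (they are not my actual highway, signal, or merge statistics, so the totals differ from the figures above), but the structure of the calculation is the same:

```python
import random

random.seed(1)  # reproducible trials

# Hypothetical stand-ins for four highway legs: (mean, sd) in minutes
HIGHWAY_LEGS = [(9.0, 1.5), (8.0, 1.2), (7.0, 1.0), (6.0, 0.8)]

def commute_time():
    # highway legs: normally distributed, clipped at zero
    t = sum(max(0.0, random.gauss(m, s)) for m, s in HIGHWAY_LEGS)
    for _ in range(7):  # seven signals: half the time, a 0-2 minute wait
        if random.random() < 0.5:
            t += random.uniform(0.0, 2.0)
    t += random.expovariate(0.5)  # one heavy merge, two minutes on average
    return t

times = [commute_time() for _ in range(10_000)]
mean = sum(times) / len(times)
sd = (sum((t - mean) ** 2 for t in times) / len(times)) ** 0.5
print(f"mean {mean:.1f} min, standard deviation {sd:.1f} min")
```

Ten thousand trials run in well under a second; the payoff of the Monte Carlo approach is that the output is a full distribution of commute times, not just a mean.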

1. R. Fagin and J. H. Williams, "A Fair Carpool Scheduling Algorithm," IBM Journal of Research and Development, vol. 27, no. 2 (1983), pp. 133 ff.
2. Joe Winters, "An accident? Construction work? A bottleneck? No, just too much traffic" (Institute of Physics Press Release, March 4, 2008).
3. Yuki Sugiyama, Minoru Fukui, Macoto Kikuchi, Katsuya Hasebe, Akihiro Nakayama, Katsuhiro Nishinari, Shin-ichi Tadaki and Satoshi Yukawa, "Traffic jams without bottlenecks - experimental evidence for the physical mechanism of the formation of a jam," New Journal of Physics, vol. 10, no. 3 (March 1, 2008), pp. 033001 ff.
4. Melissa Edler, "New study examines traffic congestion on a university campus" (Kent State University Press Release, March 25, 2008).

April 04, 2008

Baseball Statistics

Now that spring is here, can the boys of summer be far behind? "Boys of summer" is a colloquial expression for baseball players. Baseball is very popular in the US, although it's not as popular as soccer (a.k.a., football) in the rest of the world. I compared baseball players' salaries with the monetary portion of the Nobel Prize in a previous article (And the Winner is..., October 10, 2007). At least by that metric, ball players are well loved.

Statisticians who are baseball fans enjoy the game both inside and outside the baseball park. Bruce Bukiet of the New Jersey Institute of Technology, a mathematician who usually develops mathematical models for detonation waves, has made baseball a principal research topic. Bukiet uses baseball math to engage science teachers whom he mentors as part of a National Science Foundation project for disadvantaged schools. His baseball "picks" interest many others because his model for recommending wagers on the outcome of ball games has won more money than it's lost in the last seven years [1]. Bukiet's model predicts the number of games each major league baseball team will win. He's a Mets fan, but he doesn't let personal preference cloud his results.

Bukiet predicts that "The New York Yankees, Boston Red Sox, Detroit Tigers and Los Angeles Angels should make the playoffs in the American League in 2008, with the other teams lagging well behind... The National League should see much tighter races, with the New York Mets and Atlanta Braves winning the East and the wild card, respectively, while in the Central and West Divisions only the Pittsburgh Pirates and the San Francisco Giants have no real shot of making it to the post-season." Perhaps we'll see another Yankees-Braves World Series this year. More detailed results are found in Ref. 1.

In another application of mathematics to baseball, Samuel Arbesman, a graduate student at Cornell University, and Steven Strogatz, a professor of applied mathematics, also at Cornell, looked at Joe DiMaggio's famous batting streak [2]. DiMaggio had a hit in 56 consecutive games in 1941. Arbesman and Strogatz weren't just interested in DiMaggio. They ran their analysis for every baseball player from 1871 to the present. They did a very simple Monte Carlo analysis based on a player's likelihood of getting a hit in any single game in a given year. For example, DiMaggio had an 81% likelihood of getting a hit in any single game in 1941. Note that the probability of his having a 56 game hitting streak is not (0.81)^56, since the streak could start at any time in the season, and there were 154 games in that season. Arbesman and Strogatz found that streaks such as DiMaggio's would not have been that unlikely, especially in the early years of baseball, when it was a batter's game. DiMaggio was the lucky player whose real universe matched one of his particular Monte Carlo universes in 1941. There are some, however, who believe that DiMaggio had help from the official scorer, who logged two questionable at-bats in the 30th and 31st games as hits, rather than errors [3]. In one play, the shortstop missed the ball, which bounced off his shoulder; in the other, the shortstop (the same guy!) dropped the ball. DiMaggio's streak could have ended at 29 games. In any case, DiMaggio was still a phenomenal ball player.
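The point that a streak can start anywhere in the season can be made exact, without any Monte Carlo, by a small dynamic program over the length of the current hitting run (this is my own illustrative calculation, not the Arbesman-Strogatz method, which simulated entire careers):

```python
def prob_streak(p, r, n):
    """Probability of at least one run of r straight hit-games in a
    season of n games, hitting in each game independently with
    probability p. Dynamic program over the trailing run length."""
    state = [0.0] * r  # state[k] = P(trailing run length k, no r-run yet)
    state[0] = 1.0
    done = 0.0         # probability an r-game streak has already occurred
    for _ in range(n):
        new = [0.0] * r
        for k, prob in enumerate(state):
            if prob == 0.0:
                continue
            if k + 1 == r:
                done += prob * p        # the run reaches r games: streak!
            else:
                new[k + 1] += prob * p  # a hit extends the run
            new[0] += prob * (1.0 - p)  # a hitless game resets the run
        state = new
    return done

# DiMaggio's 1941 numbers: p = 0.81 per game, 56-game streak, 154 games
print(prob_streak(0.81, 56, 154))  # small, on the order of 1e-4
```

Even with an 81% per-game hit probability, a 56-game streak in a single season is roughly a one-in-several-thousand event, which is why it took many simulated "universes" (and many player-seasons of real baseball) for one to appear.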

The Boys of Summer is the title of a song by Don Henley, although Henley has stated that his song is not about baseball.

1. Sheryl Weinstein, "Mathematician foresees romps for Major League Baseball's American League in 2008" (New Jersey Institute of Technology Press Release, March 31, 2008).
2. Samuel Arbesman and Steven Strogatz, "A Journey to Baseball's Alternate Universe" (New York Times Op-Ed, March 30, 2008).
3. John Allen Paulos, "Does Joe DiMaggio's Streak Deserve an Asterisk?" (ABC News, October 7, 2007).
4. Bruce Bukiet's Home Page.

April 03, 2008

Aluminum (née Aluminium)

April marks the anniversary of the patenting by Charles Martin Hall of what is today called the Hall-Héroult process for production of aluminum [1]. Hall filed for his patent in 1886, and it was issued in April, 1889 [3]; the process was discovered independently by the Frenchman, Paul Héroult. Aluminum salts cannot be reduced electrolytically in aqueous solutions, since aluminum would react with water to produce aluminum hydroxide and hydrogen gas. For this reason, production of aluminum was difficult, and aluminum was a rare metal before Hall. Aluminum was first isolated in 1827 by Friedrich Wöhler, who reacted anhydrous aluminium chloride with potassium, although an impure aluminum was produced by the Danish physicist, Hans Christian Oersted (of magnetism fame), two years earlier. Aluminum was so rare that it was priced about the same as silver when it was selected as the material for the apex cap of the Washington Monument in 1884.

As I wrote in a previous article (Greatest Material Events of All Time, April 10, 2007), the Hall-Héroult process is considered by materials scientists to be one of the most important events in the history of materials [2]. The process involves a lot of materials science. Aluminum oxide (alumina, Al2O3) is dissolved in molten cryolite (Na3AlF6). Although Al2O3 has a melting point above 2000 °C, it dissolves in cryolite to form a lower melting point mixture, and AlF3 is added to further reduce the melting point. This molten mixture is electrolyzed using a carbon cathode (typically, the carbon container holding the molten salt) and a carbon anode. The reaction is as follows:

2Al2O3 + 3C -> 4Al + 3CO2

It can be seen from the reaction that the anode is sacrificed, being consumed as CO2. Since aluminum has a low melting point (660 °C) and a higher density than the molten salt, the liquid aluminum deposited at the cathode sinks to the bottom of the container, where it's collected. The process is extremely simple, and its only drawback is its need for large amounts of electrical power, about eight kilowatt-hours per pound of aluminum produced. There's also the energy needed to fuse the salt and keep it molten. Hall opened his first plant for aluminum production in Pittsburgh in 1888, just two years after he applied for his patent, and his process reduced the price of aluminum by a factor of 200. Hall's company eventually became Alcoa.
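A back-of-the-envelope check of that power figure is easy with Faraday's law. The Python sketch below assumes a cell voltage of four volts, a typical textbook figure rather than anything from Hall's patent, and gives only the ideal electrical minimum:

```python
# Minimum electrical energy to win aluminum by electrolysis (Faraday's law).
# The 4 V cell voltage is an assumed, representative figure.
F = 96485.0    # Faraday constant, coulombs per mole of electrons
z = 3          # electrons per atom: Al(3+) + 3e- -> Al
M_Al = 26.98   # molar mass of aluminum, g/mol
V_cell = 4.0   # assumed operating cell voltage, volts

charge_per_kg = z * F * 1000.0 / M_Al            # coulombs per kg of Al
energy_kwh_per_kg = charge_per_kg * V_cell / 3.6e6
energy_kwh_per_lb = energy_kwh_per_kg * 0.4536

print(f"{energy_kwh_per_kg:.1f} kWh/kg, {energy_kwh_per_lb:.1f} kWh/lb")
```

The ideal figure comes out near 5.4 kilowatt-hours per pound, so a real-world value of about eight implies an overall efficiency of roughly two-thirds, a plausible number once heat losses are counted.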

Aluminum by any other name would still be aluminium. Americans call this element aluminum, while most of the world knows it as aluminium. Hall apparently misspelled "aluminium" as "aluminum" in some promotional literature. Since the metal was uncommon before Hall, his spelling became the prominent one in America, and it's used even today.

1. C. M. Hall, "Process of reducing aluminium from its fluoride salts by electrolysis," US Patent 400,664, April 2, 1889.
2. List of Great Materials Moments (The Materials Society).
3. Randy Alfred, "April 2, 1889: Aluminum Process Foils Steep Prices" (Wired News, April 2, 2008).

April 02, 2008

Playing the Market

When I was an undergraduate student, one of my physics teachers told me that all physics was just Newton's equation, F = ma. You just needed to know what F, m and a were at any given time. What he was saying is that, whereas the ideas of physics are simple, their application is often difficult. This principle applies to the stock market. As the joke goes, the stock market is easy. You just buy low and sell high. The trick, of course, is knowing what's low and what's high at any given time.

There are physicists, called "quants," who model financial systems like the stock market for a living. Prediction of money matters like the stock market used to be the realm of the economist, but history shows that economists can explain things easily after they've happened, but not before, and then only in general terms. My undergraduate economics textbook, Economics by Paul A. Samuelson (Sixth Edition, McGraw-Hill Book Company, 1964), had only a single differential equation. The mathematician, Stanislaw Ulam, once challenged Samuelson to name one nontrivial theory in the social sciences, of which traditional economics is a part. Samuelson responded with the theory of comparative advantage, something that's been around since 1817. Apparently, he couldn't think of anything more recent, at least at the time the question was asked.

There are a few milestones in the success of mathematical analysis of financial markets. The 1952 paper, "Portfolio Selection," by Harry Markowitz is one, since it was the first major publication to use advanced mathematics in financial analysis, especially in the quantification of diversification. Stochastic calculus was introduced to financial analysis by Robert Merton in 1969. As the word "stochastic" suggests, Merton had essentially given up on the idea of cause and effect and admitted that the market might as well be random. The 1997 Nobel Prize in Economics was awarded to Merton and to Myron Scholes, who developed the famous (or notorious, as the case may be) Black-Scholes option pricing formula. Many more examples of mathematical finance are found in the references [1].
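For the curious, the Black-Scholes formula for a European call option is compact enough to show here. This is a textbook implementation in Python; the example numbers at the end are arbitrary, not drawn from any real stock:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal cumulative distribution, via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.
    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# an at-the-money call: one year to expiry, 5% rate, 20% volatility
price = black_scholes_call(100.0, 100.0, 1.0, 0.05, 0.20)
print(f"call price: {price:.2f}")
```

Note that nothing in the formula claims to know where the stock is headed; volatility and the risk-free rate are the only inputs beyond the contract terms, which is exactly the "random walk" worldview mentioned above.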

Now, back to our central problem of identifying what's low and what's high. Stock prices can be modeled using a stochastic oscillator, a model developed by George C. Lane in the 1950s. Once again, the stochastic nature of markets is highlighted. The stochastic oscillator is a way to compare the closing price of a stock with its recent trading range, the idea being that closing prices consistently near the top of the range indicate buying pressure, and closing prices near the bottom of the range indicate selling pressure. Investors who believe in the stochastic oscillator model will buy when there's buying pressure (a bull market) and sell when there's selling pressure (a bear market). In any case, an investor needs to be cautious. The mathematics may be precise, but the interpretation is a matter of faith.
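The arithmetic behind the stochastic oscillator is simple. Here's a minimal Python sketch of the "fast" %K calculation over the conventional fourteen-bar window; the price series is invented purely for illustration:

```python
def stochastic_oscillator(closes, highs, lows, period=14):
    """Fast stochastic %K: where each close sits within the high-low
    range of the last `period` bars (0 = bottom of range, 100 = top)."""
    k = []
    for i in range(period - 1, len(closes)):
        hh = max(highs[i - period + 1 : i + 1])  # highest high in window
        ll = min(lows[i - period + 1 : i + 1])   # lowest low in window
        k.append(100.0 * (closes[i] - ll) / (hh - ll))
    return k

# toy data: a steady uptrend, so each close hugs the top of its range
closes = [float(x) for x in range(1, 21)]
highs = [c + 1.0 for c in closes]
lows = [c - 1.0 for c in closes]
k = stochastic_oscillator(closes, highs, lows)
```

In this contrived uptrend every %K value lands above 90, the region practitioners call "overbought"; in practice %K is usually smoothed into a %D signal line before anyone acts on it.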

1. Mathematical Finance Page at Wikipedia.
2. Stochastic Oscillator page at StockCharts.com.

April 01, 2008

Flexible Circuits

The mechanical properties of objects depend as much on their geometry as on their materials of construction. One simple example is the contrast between a toothpick and a sheet of paper. Both are made from wood, but the texture of the cellulose fibers and the smaller thickness of the paper allow it to be less stiff and more flexible. The formal way to state this is that stiffness is an extensive property of an object. Stiffness depends on an intensive material property, the elastic modulus, which is fixed once you choose the material; at that point, you need to adjust the geometry of the component for a desired stiffness. Just as in the case of the toothpick and the sheet of paper, making a thin sheet or thin fiber of any material (e.g., aluminum foil) leads to flexibility. Glass itself is a stiff, brittle material, but forming glass into a fiber allows it to be bent easily along one of its dimensions.
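The cube-law dependence of bending stiffness on thickness is what makes this work. For a thin plate, the flexural rigidity per unit width is D = E t³ / 12(1 − ν²), so halving the thickness cuts the bending stiffness by a factor of eight. A quick Python illustration, using rough handbook values for glass (assumed figures, not measurements):

```python
def flexural_rigidity(E, t, nu):
    """Bending stiffness per unit width of a thin plate:
    D = E * t**3 / (12 * (1 - nu**2))."""
    return E * t**3 / (12.0 * (1.0 - nu**2))

E_glass = 70e9  # Pa, rough elastic modulus of soda-lime glass (assumed)
nu = 0.22       # Poisson's ratio (assumed)

D_pane = flexural_rigidity(E_glass, 3e-3, nu)     # a 3 mm window pane
D_sheet = flexural_rigidity(E_glass, 100e-6, nu)  # a 100 micron thin sheet
ratio = D_pane / D_sheet
print(f"the pane is {ratio:.0f} times stiffer in bending")
```

Thinning the same glass by a factor of 30 makes it 27,000 times easier to bend, which is why a fiber flexes while a pane shatters.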

This same idea of managing flexibility by geometry has been applied to the workhorse of electronic circuitry, silicon, to make flexible circuits [1-2]. A multi-national research team with members from the University of Illinois at Urbana-Champaign, Northwestern University, Sungkyunkwan University (Korea), and the Institute of High Performance Computing (Singapore) has published a paper describing high performance, stretchable, and foldable integrated circuits of silicon [3]. The team, led by John A. Rogers of the Departments of Materials Science and Engineering, Beckman Institute, and the Frederick Seitz Materials Research Laboratory of the University of Illinois at Urbana-Champaign, accomplished this in a laminate structure of thin silicon with ultrathin plastic and elastomer sheets.

Rogers, who has been working for many years on novel methods of fabricating silicon circuits, first published this idea in 2005 [4]. An important feature of this laminate is its inherently wave-like, buckled structure which allows reversible stretching in one direction. This stretching does not alter the electrical properties of the silicon enough to disturb the functionality of the electronic circuitry. The latest work indicates that this process can be scaled to produce circuitry of arbitrary complexity. In other processes, the relative positioning of the silicon in the layers is crucial to their bendability. When a sheet is bent, there is a compressive strain developed on one side, and a tensile strain on the other. The usual trick is to place the most brittle material, in this case the silicon, in the middle in a position called the neutral mechanical plane. This plane has zero strain, and the material can tolerate a tighter bending radius if it's extremely thin and placed in this plane. Rogers' laminate is somewhat different, since it uses a wave-like structure of alternating compression and tension in one direction of the sheet to allow bending.
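The numbers behind the neutral-plane trick are easy to sketch. When a sheet is bent to a radius R, the strain a distance y from the neutral plane is y/R, so a freestanding film of thickness t sees a peak surface strain of t/2R. Taking an assumed fracture strain of about one percent for silicon (a commonly quoted ballpark, not a figure from the paper):

```python
def surface_strain(t, R):
    """Peak bending strain of a film of thickness t bent to radius R,
    with the neutral plane at mid-thickness: eps = t / (2 R)."""
    return t / (2.0 * R)

def min_bend_radius(t, eps_max):
    """Tightest bend radius before the surface strain exceeds eps_max."""
    return t / (2.0 * eps_max)

eps_fracture = 0.01  # ~1% fracture strain for silicon (assumed ballpark)

# compare a 0.5 mm silicon wafer with a 100 nm silicon nanoribbon:
R_wafer = min_bend_radius(500e-6, eps_fracture)
R_ribbon = min_bend_radius(100e-9, eps_fracture)
print(f"wafer: {R_wafer*1e3:.0f} mm, ribbon: {R_ribbon*1e6:.0f} microns")
```

Thinning the silicon from wafer scale to nanoribbon scale shrinks the tolerable bending radius from centimeters to microns, and placing the ribbons at the strain-free neutral plane relaxes even that limit.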

To produce their bendable electronics, Rogers' team first applies a sacrificial polymer layer to a rigid substrate that's used as a carrier. The sacrificial layer allows subsequent layers to be removed. Next, they deposit a thin plastic coating, and then bond the silicon components, fabricated from nano-ribbons of silicon by conventional processing techniques, to it. The sacrificial layer is removed, and the plastic-silicon composite is bonded to a prestrained sheet of silicone rubber. When the strain is relieved, the laminate buckles into a wave-like pattern of alternating compression and tension. It's this pattern that allows bending in one direction of the sheet.
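The scale of that wave pattern follows from the standard mechanics of a stiff film on a compliant substrate: the buckling wavelength is about 2π h (Ē_f / 3Ē_s)^(1/3), where h is the film thickness and Ē_f, Ē_s are the plane-strain moduli of film and substrate. This is the general thin-film result, not a formula taken from the paper, and the numbers below (100 nm silicon on a soft silicone rubber) are assumed for illustration:

```python
from math import pi

def plane_strain_modulus(E, nu):
    """E-bar = E / (1 - nu^2), the modulus relevant to plate bending."""
    return E / (1.0 - nu**2)

def buckle_wavelength(h_f, E_f, nu_f, E_s, nu_s):
    """Buckling wavelength of a stiff thin film on a compliant substrate:
    lambda = 2 * pi * h_f * (Ebar_film / (3 * Ebar_substrate))**(1/3)."""
    Ef = plane_strain_modulus(E_f, nu_f)
    Es = plane_strain_modulus(E_s, nu_s)
    return 2.0 * pi * h_f * (Ef / (3.0 * Es)) ** (1.0 / 3.0)

# assumed inputs: 100 nm silicon film (E ~130 GPa) on silicone (E ~2 MPa)
lam = buckle_wavelength(100e-9, 130e9, 0.27, 2e6, 0.48)
print(f"buckle wavelength: {lam*1e6:.0f} microns")
```

With these inputs the wavelength comes out around fifteen to twenty microns; since the modulus ratio enters only as a cube root, the answer is fairly insensitive to the exact values assumed.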

1. Jonathan Fildes, "Silicon chips stretch into shape" (BBC News, March 27, 2008).
2. James E. Kloeppel, "Foldable and stretchable, silicon circuits conform to many shapes" (Press Release, UIUC, March 27, 2008).
3. Dae-Hyeong Kim, Jong-Hyun Ahn, Won Mook Choi, Hoon-Sik Kim, Tae-Ho Kim, Jizhou Song, Yonggang Y. Huang, Zhuangjian Liu, Chun Lu, John A. Rogers, "Stretchable and Foldable Silicon Integrated Circuits," Science Online, March 27, 2008.
4. E. Menard, R.G. Nuzzo and J.A. Rogers, "Bendable Single Crystal Silicon Thin Film Transistors Formed by Printing on Plastic Substrates," Applied Physics Letters vol. 86, no. 9 (2005), 093507.