
March 30, 2007

Splish, Splash

There's a lot of physics in the "margins"; that is, in areas no one has checked because the answer seems "obvious." If you throw a ball into water, you expect a splash. After all, there's the sudden displacement of water, and it's easier for the water to push into air than to displace another volume of water. You would expect the material of the ball to have very little effect on the size of the splash, if any at all. A team of French scientists decided to investigate whether the surface condition of the ball really matters, and they were able to illuminate this previously unexplored margin of physics.

Cyril Duez, Christophe Ybert, and Lydéric Bocquet of the Université de Lyon, along with Christophe Clanet of CNRS, Marseille, dropped spheres of aluminum, steel and glass with diameters between 7 mm and 25.4 mm (one inch) from a height of 1.25 meters into water [1]. The spheres were chemically treated to produce surfaces ranging from extremely hydrophilic to extremely hydrophobic. The most hydrophilic surface was obtained by treating glass in a hydrogen peroxide/sulfuric acid mixture ("piranha solution"), followed by a deionized water rinse and heating to 110 °C. Hydrophobic surfaces were obtained by attaching silane chains to the surface by various treatments, depending on the ball material.

Duez, et al., found that the hydrophobic surfaces generate a larger splash. Their theory is that a hydrophobic surface creates an air cavity at the ball-water interface that allows extra movement of water to create a splash. Hydrophilic surfaces allow the water to flow smoothly around the ball, so the air cavity does not form. There is a historical precedent for their work. More than a hundred years ago, A. M. Worthington and R. S. Cole used high-speed photography to study the impact of solid spheres on water [2]. They observed a range of splashing behavior depending on whether the ball surface was "rough" or "smooth," but they didn't speculate on a cause. Of course, laboratory instrumentation was not well developed a hundred years ago, so they can be excused for publishing only qualitative information. Duez, et al., used high-speed photography, as did Worthington and Cole, but they also recorded the sound. Hydrophilic surfaces gave a weak "plop," but the hydrophobic surfaces gave loud splashes.

Bobby Darin (Walden Robert Cassotto, 1936-1973) recorded the song, Splish Splash in 1958. Darin was said to have had a genius IQ, and he attended the Bronx High School of Science. Perhaps you can see the interplay between genius and madness in these lyrics [3].

Splish splash, I was takin' a bath
Long about a Saturday night
A rub-a-dub, just relaxin' in the tub
Thinking everything was all right

1. Cyril Duez, et al., "Making a splash with water repellency," Nature Physics vol. 3 (March 2007), pp. 180-183.
2. A. M. Worthington and R. S. Cole, "Impact with a liquid surface studied by the aid of instantaneous photography II," Phil. Trans., vol. A 194 (R. Soc., London, 1900), pp. 175-199.
3. Splish Splash (Darin-Murray, Atco 6117, 1958).
4. Katharine Sanderson, "Small plops, big splashes," (Nature Online - Subscription Required).
5. Gaia Vince, "Nano-coating makes for an awesome splash" (New Scientist, 25 February 2007).

March 29, 2007

Fuel Tank Inerting

Honeywell produces a polychlorotrifluoroethylene film used in pharmaceutical packaging. The material is called Aclar®, a name derived from the "lar" ending of other film materials, such as Mylar, and "AC" for added chlorine (I would have bet the AC was for Allied Chemical). Aclar® is chemically resistant, and it has low water permeability, so it's an excellent material for blister-pack drugs. It is deficient in one respect - it's somewhat permeable to oxygen - and some drugs are oxygen sensitive. The solution is to laminate Aclar® with an oxygen barrier film. As if to validate the tenet, "There are no problems, only opportunities," some polymer films [1,2] with greater oxygen permeability than Aclar® are being put to good use to separate oxygen from nitrogen in air and provide an inert atmosphere for aircraft fuel tanks.

Filling the airspace of fuel tanks with an inert gas, a process known as fuel tank inerting, has been common for military aircraft since World War II, and there are as many techniques for this as there are models of aircraft. The use of this technique was considered too expensive for commercial aircraft, but this viewpoint changed after the loss of TWA Flight 800 in 1996. The investigation of that accident concluded that electrical sparks caused an explosion of the center wing fuel tank, and the US Federal Aviation Administration has proposed fuel tank inerting for all large passenger aircraft, although in this case "inert" means less than 12% oxygen (air has 21% oxygen). The front-runner technology for this is oxygen permeable membranes, which have been used principally to oxygen-enrich breathing air.

What makes a membrane material oxygen permeable? The functional parameters here are the oxygen solubility and diffusivity of the material and the differential partial pressure of oxygen across the membrane. In the fuel tank inerting application, the differential solubility and diffusivity of nitrogen and oxygen are the important factors. All materials solubilize gas to some extent (a bane to much metallurgy). The solubility of a gas in water, for example, is summarized by Henry's Law

c = p/k

where c is the concentration of the gas in water, p is the partial pressure of the gas, and k is a temperature dependent constant with units liter-atm/mole. Nitrogen (k = 1639.34 liter-atm/mole at 298 K) is less soluble in water than oxygen (k = 769.2 liter-atm/mole at 298 K), so a cyclic process could be devised to separate oxygen and nitrogen using water.
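As a quick sanity check on these numbers, the concentrations follow directly from Henry's Law. The script below (a minimal sketch, using only the k values quoted above) computes the dissolved concentration of each gas for air-saturated water at 1 atm:

```python
# Henry's Law: c = p/k, with k in liter-atm/mole (values quoted above, 298 K).
K_N2 = 1639.34  # nitrogen
K_O2 = 769.2    # oxygen

def henry_concentration(p_atm, k):
    """Dissolved gas concentration in mol/liter at partial pressure p_atm."""
    return p_atm / k

# Partial pressures in air at a total pressure of 1 atm: ~78% N2, ~21% O2.
c_n2 = henry_concentration(0.78, K_N2)
c_o2 = henry_concentration(0.21, K_O2)
print(f"N2: {c_n2:.2e} mol/L, O2: {c_o2:.2e} mol/L")
```

Per unit of partial pressure, oxygen is the more soluble gas (its k is smaller), although air-saturated water actually holds more nitrogen simply because nitrogen dominates the atmosphere.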

For glassy polymers, the permeability will generally be a function of the diffusivity of the gas molecule, and the differential diffusivity will depend principally on the size of the molecule. In a crude model, we can sum twice the van der Waals radius of the oxygen atom (152 pm) and the length of the oxygen-oxygen bond (121 pm) to get an approximate size of 425 pm for the O2 molecule (in the "long" dimension); and likewise for nitrogen (van der Waals radius = 155 pm, bond length = 129 pm, N2 size = 439 pm). You can see that oxygen has the advantage of a smaller size, so it will diffuse more quickly. The actual differential diffusivity will depend on the free volume of the polymer, the spatial distribution of the polymer chains, and the thermal motion of the polymer segments (which opens gaps through which the gas molecules can travel).
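The crude size estimate is just arithmetic, but it's easy to get the bookkeeping wrong, so a one-liner makes the model explicit (two van der Waals radii, one on each end, plus one bond length):

```python
def long_dimension(vdw_radius_pm, bond_length_pm):
    """Crude 'long' dimension of a diatomic molecule, in picometers:
    one van der Waals radius on each end, plus the bond length."""
    return 2 * vdw_radius_pm + bond_length_pm

o2_size = long_dimension(152, 121)  # oxygen: 425 pm
n2_size = long_dimension(155, 129)  # nitrogen: 439 pm
print(o2_size, n2_size)
```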

Some inorganic materials, notably perovskites such as La1-xSrxGaO3, are used as electrolytes in solid oxide fuel cells, since they allow transport of oxygen at the cathode. These function only at high temperatures (800-1000 °C). Oxygen in this case is transported as an ion, not as the O2 molecule.

There are other uses for oxygen/nitrogen separating membranes. Argonne National Laboratory has been developing this technology for gasoline and diesel engines [3] to reduce NOx emissions.

1. Hiroyuki Nishide, Yukihiro Tsukahara, and Eishun Tsuchida, "Highly Selective Oxygen Permeation through a Poly(vinylidene dichloride)-Cobalt Porphyrin Membrane: Hopping Transport of Oxygen via the Fixed Cobalt Porphyrin Carrier," J. Phys. Chem. B, vol. 102 (1998), pp. 8766-8770.
2. R.-C. Ruaan, S.-H. Chen and J.-Y. Lai, "Oxygen/nitrogen separation by polycarbonate/Co(SalPr) complex membranes," Journal of Membrane Science, vol. 135, no. 1 (1997), pp. 9-18.
3. Variable Oxygen- or Nitrogen-Enriched Air System for Combustion Engines (Argonne National Laboratory).
4. Inerting systems (Wikipedia).
5. Gas separation (Wikipedia).

March 28, 2007

Diatom Silicon

About twenty-five years ago I read a letter written to the editor of a forgotten journal. The author included a micrograph of a very small, but perfectly patterned, chip that had been collected from an airborne dust collection experiment. The chip, he said, was obviously man-made. What was it, and how did it get airborne? The answer was that it was not man-made at all - it was a piece of diatom shell [1]. Diatom shells are beautifully patterned objects with extremely small feature size. Technology is struggling now to produce artifacts with nanoscale features, but nature has been doing this for hundreds of millions of years. I'm reminded of a helium refrigerator I worked with many years ago. It used leather gaskets, since this natural material performed better under the extreme temperature conditions than any man-made material. Can we make better use of what nature gives us?

A very large group from the School of Materials Science and Engineering at the Georgia Institute of Technology has published [2,3] a description of their research into transforming diatoms from their original silica material into elemental silicon while preserving their shape. Silicon, of course, is the premier material of electronic devices. Silicon crystals are presently produced via the Czochralski technique from a silicon melt at about 1414 °C (2577 °F). These crystals are then sliced into wafers for circuit fabrication by photolithography. The Georgia Tech study used a low temperature replacement reaction involving gaseous magnesium to produce silicon from the diatom silica (SiO2) with a magnesia (MgO) byproduct

2Mg(g) + SiO2(s) → 2MgO(s) + Si(s)

The process occurs at 650 °C, and the resulting silicon-magnesia structure maintains its shape. Subsequent immersion in a one molar HCl solution for four hours dissolves the magnesia while leaving the shaped silicon intact. An important feature of the resultant structures is a high specific surface area, which is useful for gas sensing. The Georgia Tech team fabricated a nitrous oxide gas sensor from a converted diatom shell by attaching platinum electrodes and measuring the impedance change upon gas exposure. Nitrous oxide was detected at the ppm level with a response time of less than half a minute. They also demonstrated photoluminescence, a property of nanoscale silicon.
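The replacement reaction also fixes the mass balance. As a back-of-the-envelope sketch (standard molar masses; these numbers are mine, not from the paper), here is how much magnesium the reduction consumes, and how much silicon it yields, per gram of diatom silica:

```python
# Molar masses in g/mol
M_MG, M_SI, M_O = 24.305, 28.086, 15.999
M_SIO2 = M_SI + 2 * M_O  # ~60.08 g/mol

# 2 Mg(g) + SiO2(s) -> 2 MgO(s) + Si(s)
# Two moles of Mg are consumed, and one mole of Si produced, per mole of SiO2.
mg_needed = 2 * M_MG / M_SIO2   # grams of Mg per gram of SiO2
si_yield = M_SI / M_SIO2        # grams of Si per gram of SiO2
print(f"{mg_needed:.3f} g Mg consumed, {si_yield:.3f} g Si produced per g SiO2")
```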

1. Dale W. Griffin, Christina A. Kellogg, Virginia H. Garrison, Eugene A. Shinn, "The Global Transport of Dust," American Scientist vol. 90, no. 3 (May-June 2002).
2. David J. Norris, "Materials science: Silicon life forms," Nature vol. 446, no. 7132 (8 March 2007), p. 146.
3. Zhihao Bao, Michael R. Weatherspoon, Samuel Shian, Ye Cai, Phillip D. Graham, Shawn M. Allan, Gul Ahmad, Matthew B. Dickerson, Benjamin C. Church, Zhitao Kang, Harry W. Abernathy III, Christopher J. Summers, Meilin Liu and Kenneth H. Sandhage, "Chemical reduction of three-dimensional silica micro-assemblies into microporous silicon replicas," Nature vol. 446, no. 7132 (8 March 2007), pp. 172-175.

March 27, 2007

Twenty Years after Woodstock

It all started with a single paper about a ceramic compound of lanthanum, barium, copper and oxygen (La2-xBaxCuO4) [2]. It led to the 1987 Nobel Prize in Physics for its discoverers, Georg Bednorz and Karl Müller, and many sleepless nights for solid state physicists and materials scientists. The reason for all this activity was one particular property of this material - it was a perfect conductor of electricity below a temperature of 35 Kelvin (-238.15 °C, or -396.67 °F); that is, a superconductor. What made this significant is that the record high temperature for superconductors had stood at 23 Kelvin since 1973.

I had been involved in research on superconductors in the late 1970s, and we tried a few tricks to enhance this temperature limit [1], but most such research was an exercise in futility. There was a successful theory of superconductivity, called the BCS theory after its authors (Bardeen, Cooper, and Schrieffer), but it offered no guidance as to what mix of elements you could use to obtain higher operating temperatures. Georg Bednorz and Karl Müller, working at the IBM Zürich Research Laboratory, synthesized La2-xBaxCuO4, and they were puzzled by the fact that it was black. Ceramic oxides are generally white, the idea being that there are no free electrons available to absorb energy from light. Conducting ceramics, however, are black, since some of their electrons are not bonding electrons, so they can absorb light energy. Their further studies at low temperature indicated superconductivity, and they published their observations in 1986 [2].

Since it had been almost a decade since I had worked in superconductivity, I didn't notice the Bednorz and Müller paper. However, a subsequent paper [3] in Physical Review Letters by a group at the University of Houston, Texas, caught everyone's attention, and many scientists decided to try their hand at these new materials. I tried growing single crystals of this material, but synthesized some nice crystals of copper oxide, instead. All this activity culminated in an impromptu session on these materials at the March, 1987, meeting of the American Physical Society in New York City. I didn't attend this meeting, but one attendee told me that the session was not just "Standing Room Only" - there were people standing many rows thick outside the doorway. The room's legal occupancy was about 2,000, and television monitors were set up in the hallways for the overflow crowd. This marathon session, which started at 7:30 PM and ended at about 3:00 AM, has been called the Woodstock of Physics after the Woodstock Music and Art Festival that witnessed a similar crush of humanity.

Eventually, similar materials were synthesized with operating temperatures above 100 Kelvin, and there have been some unsubstantiated reports of superconductivity near room temperature. There exist presently many materials that show superconductivity above the temperature of liquid nitrogen (77 Kelvin). However, these ceramic materials are very brittle, and they can't be formed easily into wires, so their application is limited. Research interest in this area has waned, as I reported in a previous posting. Many of the original participants of the 1987 session attended a twentieth anniversary celebration on March 5, 2007, at the APS meeting in Denver, Colorado [4].

1. P. Duffer, D.M. Gualtieri, and V.U.S. Rao, "Pronounced Isotope Effect in the Superconductivity of HfV2 Containing Hydrogen (Deuterium)," Phys. Rev. Lett. vol. 37 (1976), pp. 1410-1413.
2. J. G. Bednorz and K. A. Müller, "Possible high Tc superconductivity in the Ba-La-Cu-O system," Z. Physik vol. B 64 (1986), pp. 189-193.
3. C. W. Chu, P. H. Hor, R. L. Meng, L. Gao, Z. J. Huang, and Y. Q. Wang, "Evidence for superconductivity above 40 K in the La-Ba-Cu-O compound system," Phys. Rev. Lett. vol. 58 (1987), pp. 405-407.
4. Geoff Brumfiel, "Superconductivity two decades on," Nature vol. 446, no. 7132 (8 March 2007), p. 120.

March 26, 2007

The Next Generation

The winners of the 2007 Intel Science Talent Search (STS) were announced this month. For those of us who don't get out of our cubicles that often, the STS is our national science fair. It began with Westinghouse in 1942, and its sponsorship was taken over by Intel in 1998 at the demise of Westinghouse [1]. The contestants are all high school students who are mentored by professional scientists. Nearly 2,000 research reports are received for consideration, from which 300 are awarded special distinction, and forty finalists are selected for a competition in Washington, DC. All finalists receive a $5,000 scholarship, the winner receives a $100,000 scholarship, and nine runners-up take prizes ranging upward from $20,000. The STS is apparently a good predictor of future success. Wikipedia lists the accomplishments of past winners:

• Six have received Nobel Prizes

• Two have earned the Fields Medal

• Three have been awarded the National Medal of Science

• Ten have won a MacArthur Fellowship

• Fifty-six have been named Sloan Research Fellows

• Thirty have been elected to the National Academy of Sciences

• Five have been elected to the National Academy of Engineering

This year's winning project was a Raman spectrometer built by Mary Masterman, a high school student from Oklahoma City, OK. Mary was able to build a working spectrometer with just $300 in materials. Commercial units can cost more than $50,000. Second place went to John Vincent Pardon from Chapel Hill, NC, who proved a theorem about convex curves. Mathematics was involved also in the third place project by Dmitry Vaintrob from Eugene, OR.

The first Northeast finish was at fourth place. Catherine Schlingheyde, of Oyster Bay, New York, investigated gene-silencing. Rebecca Lynn Kaufman, also representing the Northeast from Croton-on-Hudson, NY, won fifth place for a project on hormonal effects in male schizophrenia.

New Jersey was represented by seventh and eighth place finishes by Megan Marie Blewett of nearby Madison, New Jersey, and Daniel Adam Handlin of Lincroft, New Jersey. Megan investigated proteins involved with multiple sclerosis and amyotrophic lateral sclerosis, and Daniel developed an optical satellite tracker.

Congratulations to the winners! Perhaps US science will survive another decade.

1. Steve Massey, "Who Killed Westinghouse?" (Pittsburgh Post-Gazette).
2. Aimee Cunningham, "The Next Generation: Intel Science Talent Search honors high school achievers," Science News, vol. 171, no. 11 (March 17, 2007), p. 166.
3. List of Finalists on the STS Web Site.

March 23, 2007

Ostwald Ripening

My daughter, who's a graduate student in Theology at Villanova University, telephoned last night to say she had passed her comprehensive exams. This reminded me of my comprehensive exams at Syracuse University several decades ago. It also reminded me of Ostwald ripening, as I'll explain.

One of my professors, Klaus Schroeder, had a deep interest in two things; the Ising Model, which was apparently the topic of his thesis, and Ostwald ripening. In every lecture on physical metallurgy, he would mention one, the other, or both. Well, a word to the wise is sufficient, so I studied both these topics extensively before my "comps." During the oral portion of the exams, Klaus asked me what I knew about Ostwald ripening. Of course, I knew everything about Ostwald ripening, and that may have helped to get me a passing grade.

Friedrich Wilhelm Ostwald (1853 - 1932), commonly known as Wilhelm Ostwald, was a German chemist who received the 1909 Nobel Prize in Chemistry. Ostwald is most famous for inventing the Ostwald process for production of nitric acid, and it is Ostwald who invented the term "mole" in 1900 [1], presumably as a contraction of "molecular weight." His published works total about forty thousand pages, although some of these are philosophical, and not scientific.

Simply put, Ostwald ripening, published by Ostwald in 1896, is a chemical version of the principle that the rich get richer and the poor get poorer. In a solid solution containing precipitates, the larger precipitates will grow at the expense of the smaller, principally because the smaller surface area to volume ratio of the larger precipitate particles leads to lower system energy. This process is very important for strengthening of alloys in a process called precipitation hardening. This process happens to ice cream, also, when large ice crystals grow at the expense of smaller crystals, giving older ice cream that unappetizing texture.

Ostwald's theory of precipitate ripening was extended by Yakov Zeldovich, an icon in Russian physics, who introduced the idea of a critical nucleus size. Particles larger than this size will grow, while the others will shrink. Zeldovich contributed to essentially all fields of physics, from the solid state to cosmology and elementary particles. Stephen Hawking once complimented Zeldovich by telling him that he had believed him to be a "collective author," like Nicolas Bourbaki, the pseudonym of a group of early twentieth century mathematicians who published a series of books that attempted to reduce all mathematics to set theory.
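The rich-get-richer dynamics are easy to caricature in code. The toy model below is my own illustrative sketch, not the actual Zeldovich theory: it takes the mean radius of the surviving particles as the critical size, so particles above it grow while particles below it shrink and eventually dissolve away.

```python
import random

def ripen(radii, steps=2000, rate=1e-4):
    """Toy Ostwald ripening: each particle's radius changes in proportion to
    (1/r_crit - 1/r), with the critical radius taken as the mean radius of
    the surviving particles. Particles driven to zero are removed."""
    radii = list(radii)
    for _ in range(steps):
        if len(radii) < 2:
            break
        r_crit = sum(radii) / len(radii)
        radii = [r + rate * (1.0 / r_crit - 1.0 / r) for r in radii]
        radii = [r for r in radii if r > 0]  # small particles dissolve away
    return radii

random.seed(1)
start = [random.uniform(0.5, 1.5) for _ in range(50)]
end = ripen(start)
# The largest particles grow at the expense of the smallest.
print(len(start), max(start), len(end), max(end))
```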

1. W. Ostwald, Grundriss der allgemeinen Chemie (Engelmann, Leipzig, 1900) p. 11.
2. "For whosoever hath, to him shall be given, and he shall have more abundance: but whosoever hath not, from him shall be taken away even that he hath." (Matthew 13:12).

March 22, 2007

Ideas and Execution

Larry Bossidy was Chairman and CEO of AlliedSignal from 1991 to 1999, and then Chairman of Honeywell for a short period after Honeywell was acquired by AlliedSignal. He is the author, with Ram Charan, of "Execution: The Discipline of Getting Things Done" (ISBN 0609610570), published in 2002. I was happy to give my son, a recent MBA from the Katz Graduate School of Business of the University of Pittsburgh, an autographed copy of this book. As can be gleaned from the title, Bossidy is as interested in how things are done, and how progress can be measured, as in deciding what should be done [1].

Scientists and engineers work with ideas, and we think that the idea is key, forgetting that execution is an important factor in the success of our ideas within a company. We also avoid reading business books, since they seem to have too many words and not enough numbers. What we would like is a single PowerPoint slide that quantifies the main point. Derek Sivers, president of two internet music companies, has a perspective on ideas and execution that can be presented as a single PowerPoint slide. He scores ideas and execution as follows:

• Awful idea = -1
• Weak idea = 1
• So-so idea = 5
• Good idea = 10
• Great idea = 15
• Brilliant idea = 20

• No execution = $1
• Weak execution = $1000
• So-so execution = $10,000
• Good execution = $100,000
• Great execution = $1,000,000
• Brilliant execution = $10,000,000

The net worth of your idea is the product of the idea score and the execution score. An awful idea will lose money no matter how well it is executed. Of course, all our ideas are at least good, so a good idea with good execution is worth 10 × $100,000 = $1 million, which is probably just the licensing revenue earned from an otherwise unused patent. But your once-in-a-career brilliant idea, if brilliantly executed, will lead to a 20 × $10,000,000 = $200 million business.
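Sivers' scoring fits in a few lines (a sketch of his table above, nothing more):

```python
IDEA = {"awful": -1, "weak": 1, "so-so": 5, "good": 10, "great": 15, "brilliant": 20}
EXECUTION = {"none": 1, "weak": 1_000, "so-so": 10_000, "good": 100_000,
             "great": 1_000_000, "brilliant": 10_000_000}

def business_value(idea, execution):
    """Net worth = idea score x execution score (in dollars)."""
    return IDEA[idea] * EXECUTION[execution]

print(business_value("brilliant", "none"))       # a brilliant idea alone: $20
print(business_value("good", "good"))            # $1,000,000
print(business_value("brilliant", "brilliant"))  # $200,000,000
```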

One business book any engineer will enjoy is "The Power of the 2 x 2 Matrix: Using 2x2 Thinking to Solve Business Problems and Make Better Decisions" by Alex Lowy and Phil Hood (Jossey-Bass, 2004, ISBN: 0787972924).

1. Lessons from the Book - Execution, the Discipline of Getting Things Done (PDF File).
2. Ideas and execution (numericlife.blogspot.com).

March 21, 2007

70 FORMAT(12H John Backus)

It was announced late yesterday that computer pioneer John Backus died on March 17, 2007, at age 82.

John Backus was born on December 3, 1924 in Philadelphia, Pennsylvania. His original career choice was chemistry, which he studied at the University of Virginia, but he left the university to join the US Army, where he had some medical training. Eventually, he thought of becoming a radio technician, but he became interested in mathematics and received a Master's degree from Columbia University in 1949. He joined IBM, worked on some of the first computer programs, and quickly became frustrated with the primitive state of computer programming at the time, which typically involved coding in assembly language. He eventually led the team that developed one of the first high-level computer languages, Fortran (then called FORTRAN, since the common computer output devices of the day printed only upper-case letters). The word "Fortran" was an acronym for "Formula Translating System." Along the way, he co-invented the Backus-Naur Form (usually called BNF), a syntax for describing context-free grammars; that is, a language to describe languages. One notable feature of the IBM Fortran compiler was that it was an optimizing compiler. The IBM team realized that customers would not use a compiler that didn't produce code that ran as fast as the best assembly language code.

Fortran has evolved over the years, incorporating the best features of other computer languages. It should be remembered that improvements in software co-evolve with computer hardware. The bare-bones computers of yesteryear had programming languages with limited features, since that's all the hardware could handle. Prior to 1966, every manufacturer developed its own Fortran syntax, so there were problems in porting code to other computers. Fortran finally became standardized in 1966, and it's evolved through various versions. Here's a brief summary of Fortran's history.

• FORTRAN 66, the first standard Fortran, as specified by the American Standards Association, now called ANSI.

• FORTRAN 77, the most popular Fortran, as evidenced by the reluctance to change it until 1990.

• Fortran 90, finally "Fortran," instead of "FORTRAN," with lowercase Fortran keywords.

• Fortran 95, considered to be a minor revision of Fortran 90, but notable for the incorporation of IEEE floating-point arithmetic.

• Fortran 2003, noted for object-oriented features, and interoperability with the C programming language.

• There is presently a working group for Fortran 2008, which is intended as a minor upgrade of Fortran 2003.

The title of this article is a format statement for printing "John Backus" in the FORTRAN 66 dialect I learned in the late 1960s. If this line of code gives you a twinge of nostalgia, you can consider yourself to be somewhat of a computer pioneer. If your elbow is slightly misaligned from carrying boxes of 80-column punched cards in your youth, you get to wear epaulettes with gold clusters.
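For readers who never suffered through fixed-format FORTRAN: the 12H in the title is a Hollerith edit descriptor, which copies the next twelve characters (here, a leading blank plus "John Backus") verbatim to the output. A small Python sketch of what the descriptor does:

```python
# The Hollerith descriptor "nH" takes the n characters that follow as a literal.
descriptor = "12H John Backus"
count_str, _, rest = descriptor.partition("H")
literal = rest[:int(count_str)]  # the twelve characters " John Backus"
print(literal)
```

The leading blank wasn't decorative, either: on line printers, the first character of each output record was interpreted as carriage control, and a blank meant ordinary single spacing.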

I'll leave you with two Fortran "statements."

• A good FORTRAN programmer can write FORTRAN code in any language.

• A scientist is a type of computer programmer who writes a database application in Fortran.

1. Fortran Code Examples (Wikipedia).

March 20, 2007

Global Cooling

There's an old saying about the weather, "Everyone talks about it, but nobody tries to do anything about it." This may have been a joke in the past, but we've shown that it's really not that hard to cause a global change in climate. We've been doing it accidentally for the last hundred years. It's therefore not surprising that scientists have proposed technological fixes for global warming. Some of these were listed recently by New Scientist [1].

1. Paul Crutzen, who was awarded the Nobel Prize in Chemistry in 1995, proposes injecting millions of tons of sulfur into the atmosphere. This would decrease insolation by about a percent, but it would also increase acid rain.

2. Roger Angel, an astronomer at the University of Arizona, proposes launching trillions of small objects into space to bend some sunlight around the planet and reduce insolation by about two percent.

3. Curtis Struck, an astronomer at Iowa State University wants to put a cloud of dust, possibly from the moon, into orbit around the earth. This would eclipse the sun at a regular interval and reduce insolation by about a percent.

4. The obvious solution - painting the ground white.

Aside from cost, we must consider the unintended consequences of any of these proposals. We've already mentioned the acid rain problem of injecting sulfur into the atmosphere. Objects launched into Earth orbit pose a hazard to communications satellites and astronauts, and they will eventually fall back to Earth. Painting parts of the ground white may cause thermal gradients, leading to severe weather.

1. Far-out schemes to stop climate change (New Scientist).
2. Editorial (Physics Web, February 2007).

March 19, 2007

Greatest Equations

Everyone likes lists. The advantage of a list is that it's a nice summary of a certain topic. The list is the low-tech precursor to the PowerPoint slide. Several years ago, Robert Crease, a Professor of Philosophy at the State University of New York at Stony Brook, and the historian of the nearby Brookhaven National Laboratory, polled readers of his "Critical Point" column on Physics Web to get an idea of what his readers considered to be the greatest equations of all time. Since Physics Web, as its name implies, is physics oriented, and it's maintained by the Institute of Physics, you can expect that the respondents were mostly physicists, which would skew the results. Moreover, Crease received just 120 responses, although some responses were lists and not single equations. The less modest submitted some of their own equations [2]. So, the statistics aren't that good, but we'll happily ignore that fact and present the list. Some equations (e.g., Maxwell's Equations) contain symbols that are not easily rendered on a web page, so you'll need to click on Crease's original list, or my supplied links, to see them. You could also remember what they are (no, I didn't remember them all myself, but I came close).

Maxwell's four electromagnetic equations top the list. Since these equations are essentially the unification of two forces of nature (electricity and magnetism), it's easy to see why they're at the top of the list.

• Euler's Identity (e^(iπ) = -1)

• Newton's Second Law of Motion (F = ma)

• The Pythagorean Theorem (a^2 + b^2 = c^2)

• Schrödinger's Equation (HΨ = EΨ)

• Einstein's Mass-Energy Equivalence (E = mc^2). Surprisingly low on the list.

• Boltzmann's Entropy Equation (S = k ln Ω), which connected thermodynamic entropy with probability.

• 1 + 1 = 2. This seems easy enough, but it took Whitehead and Russell an entire book to prove.

At the bottom of the list are some specialist formulas (e.g., the Least Action Principle), as well as the circumference of a circle (C = πD), the Ideal Gas Law (PV = nRT), and Planck's Equation (E = hν). I agree with Maxwell's Equations being first on the list, but I would have included the series expansion for π on my own list,

(π/4) = 1 - (1/3) + (1/5) - (1/7) + (1/9) - (1/11) ...
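This is the Gregory-Leibniz series, famously slow to converge; a few lines of code show both the convergence and the sloth:

```python
def leibniz_pi(terms):
    """Partial sum of pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..., scaled by 4."""
    return 4 * sum((-1) ** n / (2 * n + 1) for n in range(terms))

for n in (10, 1000, 100000):
    print(n, leibniz_pi(n))
# Even 100,000 terms give only about four correct decimal places.
```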

1. Robert Crease, "The greatest equations ever," Physics Web (October 2004).
2. James Watson, in his book, "The Double Helix," remarked about his collaborator, Francis Crick, "I have never seen Francis Crick in a modest mood."

March 16, 2007

Nothing New Under The Sun

After seeing the same plot many times over on television and in movies, you start to think that there is nothing new under the sun. Reuse of common themes is more likely when your vocabulary is limited, as it is in western music. Aside from a few atonal pieces that use all twelve musical notes equally, music is built from scales of just seven notes (eight, if you count the octave). Tempo, of course, adds another dimension, but your choices are from a limited set, as Beatle George Harrison found out the hard way.

In a posting on Slashdot [1], computer scientist Susan Elliott Sim of the University of California, Irvine, recalled an idea from the 1999 science fiction novel, "A Deepness in the Sky," by Vernor Vinge. In Vinge's vision of the future, computer code had become so complex, and its composition so automated, that it was impossible for humans to write new programs from scratch. Instead, new software was produced by "programmer archaeologists" who would excavate archives for useful snippets of code relevant to a certain task. These pieces of code would be threaded together to form a useful program. The underlying assumption, of course, is that everything that could be done had already been done. This may not be far from the truth. How many efficient ways can there be to sort data?

Every programmer has practiced software reuse in one way or another. All of us have modified the body of an existing program to make another program, so we've practiced code reuse on a certain level. The use of existing libraries is possibly the most common, and most effective, form of code reuse. The US National Aeronautics and Space Administration (NASA) needs to produce a tremendous amount of error-free code, and it is very interested in code reuse. In 2005, NASA conducted a survey [2] on software reuse practices among Earth scientists. Here are some highlights of the study.

• Up to 85% of a new application can be developed by reusing existing software.

• Major reasons why software is not reused
- Too difficult to understand
- Poorly documented
- Didn't exactly match requirements
- Too complex or difficult to adapt

• How useful code is found
- Personal knowledge from past projects
- Word of mouth, or networking
- Using Google, or a similar search engine

One problem identified was that published code is released as subsystems or complete applications, whereas a programmer would prefer smaller components, such as libraries or algorithms. If code were published in smaller pieces, reuse would likely increase. So, if you want your software to be reused, comment your code, and code in smaller pieces.

1. No More Coding From Scratch? (slashdot.org).
2. Software Reuse Survey (NASA, 2005).

March 15, 2007

Photon Momentum

The momentum of a massive particle, typically symbolized as p, is its mass times its velocity, or p = mv. How would you determine the momentum of a photon, the quantum of light? It has the highest possible velocity, the speed of light, but it has no mass. Your first thought is that the momentum is zero; after all, anything times zero is zero. Photons, however, do have momentum; our p = mv definition is true only for massive particles. James Clerk Maxwell used his theory of electromagnetism in 1871 to show that electromagnetic radiation - that is, photons - exerts a force on objects. Maxwell's idea that photons must have momentum was experimentally confirmed in 1899 by Pyotr Lebedev. This radiation pressure is sometimes proposed as an enabler for interplanetary and interstellar travel by solar sailing. With the emergence of quantum mechanics, we now have a definitive measure of photon momentum in a vacuum, p = hk/2π, where h is Planck's constant and k is the wave number (the magnitude of the wave vector).
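As a quick numeric check, the vacuum formula p = hk/2π reduces to p = h/λ. A minimal Python sketch (the 532 nm green wavelength is my own illustrative choice, not from the article):

```python
import math

h = 6.62607e-34          # Planck's constant, J*s
wavelength = 532e-9      # green light, m (illustrative choice)

k = 2 * math.pi / wavelength   # wave number, 1/m
p = h * k / (2 * math.pi)      # photon momentum p = hk/2pi = h/wavelength

print(f"p = {p:.3e} kg*m/s")   # a very small momentum, ~1.2e-27 kg*m/s
```

Small as it is, this momentum is what a solar sail accumulates, photon by photon.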

Of course, light behaves differently in a dielectric than in vacuum; Snell's law of refraction is one simple example. So, the question arose shortly after Lebedev's experiment as to what a photon's momentum is in a dielectric. Is it larger or smaller than the vacuum momentum? Surprisingly, the photon momentum in a dielectric is still unknown after a hundred years. There are two theoretical predictions that differ wildly from each other - by a factor of n², the square of the refractive index - and still the issue is unresolved! Hermann Minkowski calculated in 1908 [1] that the photon momentum is larger in a dielectric, and he gave a value nhk/2π, where n is the refractive index. His contemporary, Max Abraham, did a calculation in 1909 [2] that gave hk/2πn. Current thinking is that the value you obtain from your experiment depends on the type of experiment [3-4]. The calculations done by Minkowski and Abraham deal with idealized cases that lump the real and imaginary components of the dielectric constant together. Experiments partition the transfer of photon momentum differently between the real and imaginary parts of the refractive index. So, it will take a little more theory, and not just experiment, to see what really happens.
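A short sketch makes the size of the disagreement concrete. The medium (water, n = 1.33) and the wavelength are my own illustrative choices; the two formulas are those of Minkowski and Abraham quoted above:

```python
import math

h = 6.62607e-34          # Planck's constant, J*s
wavelength = 532e-9      # vacuum wavelength, m (illustrative)
n = 1.33                 # refractive index of water

k = 2 * math.pi / wavelength        # vacuum wave number
p_vacuum = h * k / (2 * math.pi)    # hk/2pi
p_minkowski = n * p_vacuum          # Minkowski: larger in the medium
p_abraham = p_vacuum / n            # Abraham: smaller in the medium

print(p_minkowski / p_abraham)      # the ratio is n**2 = 1.7689
```

For water the two predictions differ by nearly a factor of two; for a high-index material like silicon (n ≈ 3.5) the discrepancy exceeds a factor of ten.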

1. H. Minkowski, Nachr. Ges. Wiss. Göttingen, Math.-Phys. Kl. (1908), p. 53.
2. M. Abraham, Rend. Circ. Matem. Palermo vol. 28 (1909), p. 1.
3. Ulf Leonhardt, "Optics: Momentum in an uncertain light," Nature Vol. 444 No. 7121 (14 December 2006), p. 823.
4. R. Loudon, "Radiation pressure and momentum in dielectrics," Fortschritte der Physik vol. 52, issue 11-12, pp. 1134-1140.
5. Mark Buchanan, "Minkowski, Abraham and the photon momentum," Nature Physics, vol. 3, no. 2 (February, 2007), p. 73.
6. Berian James, Letter to Nature (Friday, Jan 5 2007).
7. Peter Bowyer, "The momentum of light in media: The Abraham-Minkowski controversy" (24 page PDF file).

March 14, 2007

Happy Birthday Albert !

Today is a special day. Not only is it the birthday of Albert Einstein (born March 14, 1879), but it's also Pi Day. The mathematical constant π (pi) is a transcendental, irrational number: its value is three followed by a decimal point and a never-ending, never-repeating series of decimal digits. The first few digits of pi are

3.14159 26535 89793 23846 26433 83279 50288 41971 69399 37510

Today's date, March 14, can be represented as 3.14, so today has become known as "Pi Day" to the cognoscenti. In a previous post (October 10, 2006), I reported on Akira Haraguchi, who recited 100,000 decimal places of pi from memory. I've memorized only eleven places, which is definitely overkill for all practical calculations. Only fifty digits are required to calculate the circumference of the universe to within a proton's width.

Reciting digits of pi is what many people do on Pi Day. In that respect, it's the nerd's version of Bloomsday, June 16, when the cultured elite recite (though not from memory) James Joyce's Ulysses. The events in the novel all happened to Leopold Bloom on a single day in Dublin in 1904. Of course, pie is consumed in large quantities on March 14, but the purists will wait until 1:59 PM to have their luncheon dessert.

Pi's cachet has extended beyond the nerd realm. Givenchy makes a Pi Cologne for Men, marked with its Greek symbol. Kate Bush sings a song called Pi in which she recites a hundred decimal places of pi, but she omits twenty-two places of the proper sequence. There is also a disturbing movie, Pi (1998, Darren Aronofsky, Director), about one man's obsession with pi.

If you've read this post late, and missed Pi Day, you can celebrate one of several Pi Approximation Days:

• July 22 (written 22/7 in day/month date format; 22/7 is a rational approximation of pi)

• November 10 (or November 9 for leap years, the 314th day of the year)

• December 21 (at 1:13 PM. This combines the 355th day of the year with 113, giving a more accurate rational approximation for pi, 355/113. Adjust the date to December 20 for leap years.)
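The quality of the two rational approximations behind these dates is easy to check numerically; a minimal Python sketch (the formatting choices are mine):

```python
import math

approximations = {"22/7": 22 / 7, "355/113": 355 / 113}
for name, value in approximations.items():
    error = abs(value - math.pi)
    print(f"{name} = {value:.10f}, error = {error:.1e}")
```

355/113 is astonishingly good for its size: it matches pi to six decimal places, roughly ten thousand times closer than 22/7.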

1. Erin McClam, "Pi fans have their day." (AP-Yahoo News)
2. Official Web Site for Pi Day.
3. Computing Pi (Wikipedia).
4. Pi Formulas (Mathworld).

March 13, 2007

Aluminum Hydride

Aluminum sits just below boron in Group 13 [1] of the Periodic Table. They share the same valence (+3), and many of their compounds are similar. There isn't a hundred percent correspondence in their compounds, however, since a boron cation is much smaller (23 pm ionic radius) than an aluminum cation (53.5 pm ionic radius).

There exist several boranes, compounds of boron and hydrogen. Diborane, B2H6, forms endothermically (ΔHf = +36 kJ/mol), yet it is quite stable. Most other boranes contain another element in combination with boron and hydrogen. One such compound, aluminum borohydride (Al(BH4)3), is used as an additive in jet fuels. Aluminum itself has only one pure hydride, AlH3 (CAS Number 7784-21-6). AlH3 can be synthesized only at high temperatures, and it decomposes at 150°C into aluminum and hydrogen. A better known hydride of aluminum is lithium aluminum hydride (LiAlH4) (CAS Number 16853-85-3). LiAlH4 has a strongly exothermic heat of formation (ΔHf = -116.3 kJ/mol), and it's a strong reducing agent useful in the synthesis of organic compounds. Lithium aluminum hydride reacts violently with water, and it's pyrophoric.

Collaborating scientists at several institutions [2] now have synthesized a new, extremely energetic compound of aluminum and hydrogen, Al4H6, and have shown the existence of many others [3]. They rapidly vaporized Al metal in a hydrogen atmosphere in a pulsed-arc discharge, a method commonly used for the synthesis of fullerenes. Mass-spectrometric analysis of the products showed two hundred previously unobserved aluminum hydride compounds containing from one to ten hydrogen atoms. All these could be just transiently stable, but it may be possible to synthesize Al4H6 in bulk. Photoelectron spectroscopy of Al4H6 shows a 1.9 eV gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) in the molecule, which suggests some stability. If the bulk compound could be created, the combustion of Al4H6 to yield Al2O3 and H2O would have a heat of reaction of -438 kcal/mol. This is more than two-and-a-half times greater than that of methane.

1. As defined by the International Union of Pure and Applied Chemistry.
2. Johns Hopkins University, University of Konstanz, Virginia Commonwealth University, and the University of Karlsruhe.
3. X. Li, A. Grubisic, S. T. Stokes, J. Cordes, G. F. Ganteför, K. H. Bowen, B. Kiran, M. Willis, P. Jena, R. Burgert, and H. Schnökel, "Unexpected Stability of Al4H6: A Borane Analog?" Science vol. 315, no. 5810 (19 January 2007), pp. 356 - 358.
4. Katerina Busuttil, "New Al hydrides get energetic." (Materials Today, PDF File).

March 12, 2007

Ms. Carbon

When I opened my March, 2007, issue of Physics Today, I was greeted by a full-page photograph of a familiar face: Mildred Dresselhaus, a professor of Physics and Electrical Engineering at the Massachusetts Institute of Technology (MIT). Dresselhaus has been a professor at MIT since 1967, and she was named an Institute Professor in 1985. Her photograph was there in an ad from L'Oréal because she had just won a L'Oréal-UNESCO 2007 Award for Women in Science for "her research on solid state materials, including conceptualizing the creation of carbon nanotubes." [1] The tagline in the ad stated that "Science needs women." [2]

Although Dresselhaus is now seventy-four, she is still quite active. She presented an invited paper at a symposium I attended two years ago at the Princeton Institute for the Science and Technology of Materials (PRISM). Not surprisingly, her talk was on carbon nanotubes. She was President of the American Physical Society and President of the American Association for the Advancement of Science (As a member of these two organizations, it was an easy decision for me to vote for her each time). She is also a member of the US National Academy of Sciences. She is one of the most respected contemporary physicists, and she was able to carry on with this career while being mother of four children. As a role model for women scientists, she has promoted increased participation of women in science.

Dresselhaus' own role model was Rosalyn Yalow, who was one of her physics professors at Hunter College. Yalow was awarded the 1977 Nobel Prize in Physiology or Medicine for the development of radioimmunoassay. Before being mentored by Yalow, Dresselhaus never thought that she could become a professional physicist.

Dresselhaus' interest in carbon began in the 1960s while she was at Lincoln Laboratory, a federal laboratory managed by MIT. At Lincoln Labs she investigated the electronic, optical, and magneto-optical properties of graphite. Dresselhaus, collaborating with Ali Javan, who invented the gas laser, used a laser for magneto-optics studies of graphite and refined the electron energy levels in graphite. [3] Of course, one of the most interesting features of carbon is its layered structure. It's possible to soak graphite in ionic media (e.g., ferric chloride) to make a material with metal atoms surrounded by carbon sheets, a process called intercalation. Dresselhaus has studied intercalates for two decades. At a 1991 conference on fullerenes, she was the first to propose the carbon nanotube structure, and her name has been associated with carbon nanotubes since that time. By the way, Mister Mildred Dresselhaus, also known as Gene Dresselhaus, is an eminent theoretical physicist.

1. Physics Today, vol. 60, no. 1 (March, 2007), p. 9.
2. Mars Needs Women was a 1967 made-for-TV movie. Yvonne Craig, a.k.a., Batgirl, played Dr. Bolen, a woman scientist.
3. P.R. Schroeder, M.S. Dresselhaus, and A. Javan, "Location of Electron and Hole Carriers in Graphite from Laser Magnetoreflection Data," Physical Review Letters vol. 20 (1968), pp. 1292ff.
4. Maggie Wittlin, "Dresselhaus Wins L'Oréal-UNESCO For Women in Science Prize." (Seed Magazine).
5. Dresselhaus wins L'Oréal-UNESCO Award (MIT Press Release).
6. AGORA, an interactive forum sponsored by the L'Oréal-UNESCO For Women In Science program.

March 09, 2007

Graphene Electronics

Silicon is the most common semiconductor used in electronics, but it wasn't the first. The first transistor was made from germanium, a neighbor of silicon in the Periodic Table, and transistors were made exclusively from germanium for many years thereafter. Germanium was used in the first transistors because it was easy to purify and make into single crystals. For several years I worked with Ernie Buehler, who produced the germanium single crystal used to make the first transistor while he was at Bell Laboratories [1]. Eventually, silicon replaced germanium since its electronic properties are less sensitive to temperature.

Carbon sits just above silicon and germanium in the Periodic Table, but its single crystal form, diamond, is not easily doped because the lattice spacing of atoms in the crystal is too small to accommodate other elements. Another allotrope of carbon, graphite, is a semimetal, but it has a strange layered structure and is one of the softest materials, so it's not useful for making transistors. There has been much research activity on carbon nanotubes, which are cylindrical carbon molecules. They are actually single-atom-thick sheets of graphite rolled into cylinders, so they are a form of graphene, which is a single layer of graphite.

Researchers at The University of Manchester have made a single electron transistor (SET) from a sheet of graphene [2,3]. The sheet was patterned to have a small restriction, somewhat like the gate of a field effect transistor (FET), that can contain a single electron. When an electron is present in the restriction, there is no conduction through the restriction because of a well-known effect called the Coulomb blockade, so the action is not precisely that of an FET. The device acts like a transistor, since a small electric field can shuttle the electron in and out of the restriction, switching the current flow in the device. It's easy to move single electrons, so the SET device is very sensitive and very fast.
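The Coulomb blockade can be made concrete with a back-of-the-envelope calculation: adding one electron to an island of capacitance C costs a charging energy e²/2C, and the blockade survives only if that energy exceeds the thermal energy kT. A Python sketch with an illustrative 1 attofarad island (the article gives no device values):

```python
e = 1.602e-19      # elementary charge, C
kB = 1.381e-23     # Boltzmann constant, J/K
C = 1e-18          # island capacitance, 1 aF (illustrative, not from the article)
T = 300            # room temperature, K

E_charging = e**2 / (2 * C)      # energy cost of adding one electron
ratio = E_charging / (kB * T)    # must exceed 1 for blockade to survive kT
print(ratio)                     # ~3: marginal at room temperature
```

A ratio of only about three is marginal, which is why room-temperature operation pushes designers toward ever-smaller islands: halving the island size roughly halves C and doubles the charging energy.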

Of course, a scientist's work is never done, and this proof-of-concept graphene device needs further development. One fundamental problem is that the graphene sheet is not terminated like a carbon nanotube, so there are dangling bonds at the edges that can cause problems. This problem is lessened when the graphene sheet is large, but this goes against the idea of having a very small transistor. If microelectronic trends continue, transistors will be 20 nanometers in size by 2020, and this seems to be the limit for silicon transistors. There's still a lot of time until 2020, so maybe graphene can be made into the next generation of electronic circuitry by that time.

1. G. K. Teal, M. Sparks, and E. Buehler, "Growth of Germanium Single Crystals Containing p-n Junctions," Phys. Rev. 81 (1951), pp. 637 - 637.
2. Katharine Sanderson, "Graphene steps up to silicon's challenge" (Nature, subscription required)
3. A. K. Geim and K. S. Novoselov, "The rise of graphene," Nature Materials vol. 6 (2007), pp. 183-191.

March 08, 2007

Bring on The Noise

New York (both city and state) is a near neighbor of our Morristown, New Jersey, location. We are dominated by the New York City media. All our broadcast television stations are New York-centric, so many commercial messages are directed at a New York audience. Just last week, I saw a public service announcement about NYQUITS, a state program to help New Yorkers stop smoking. Transposing two letters in "NYQUITS" gives "NYQUIST," and that reminded me of Harry Nyquist, a physicist famous for his work in information theory.

Harry Nyquist (1889-1976) was born in Sweden, but he emigrated to the US in 1907, receiving a Ph.D. in physics from Yale University in 1917. He accepted a position with the Department of Development and Research of the American Telephone and Telegraph Company (AT&T), continued there when it became Bell Telephone Laboratories in 1934, and remained there until his retirement in 1954. His contributions to communications technology include characterization of thermal noise in electrical circuits [1] and the bandwidth requirements for communication at a given rate [2]. Unfortunately for Nyquist, the thermal noise is more commonly associated with a Bell Labs colleague, John Johnson, and is called Johnson noise. Furthermore, his communications bandwidth rule was incorporated into a much larger theory by another Bell Labs colleague, Claude Shannon, and the result is now known as the Shannon sampling theorem.

Since electrical current is the movement of electrons, it's not unusual that the thermal motion of electrons would create a current. Since these motions are random, the current is a noise current. John B. Johnson, Nyquist's colleague at Bell Labs, was the first to measure this noise in 1928 [3]. Nyquist did a theoretical study of these currents, and his results can be reduced to a formula,

P(dBm) = -174 + 10 log10(Δf)

where P(dBm) is the noise power at room temperature (measured in decibels with respect to a milliwatt) and Δf is the bandwidth in hertz. Fortunately, a typical analog voice channel (10 kHz bandwidth) has a fundamental noise floor of -134 dBm, far too faint to be audible.
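The -134 dBm figure follows directly from the thermal noise power kTB; a short Python check (290 K is the conventional "room temperature" behind the -174 dBm/Hz constant):

```python
import math

kB = 1.381e-23   # Boltzmann constant, J/K
T = 290          # reference "room temperature" used in RF work, K
bw = 10e3        # analog voice channel bandwidth, Hz

p_watts = kB * T * bw                      # thermal noise power kTB
p_dbm = 10 * math.log10(p_watts / 1e-3)    # convert to dB re 1 mW
print(round(p_dbm, 1))                     # about -134 dBm

# Same answer from the rule of thumb in the text:
print(-174 + 10 * math.log10(bw))          # -134.0
```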

In 1927, Nyquist found that the rate of identifiable pulses that could be sent through a telegraph channel is limited to twice the channel bandwidth (equivalently, the minimum pulse interval is the reciprocal of twice the bandwidth). Shannon inverted this idea to state that a signal containing frequencies up to f must be sampled at a rate of at least 2f to be reconstructed exactly, a result now called the Shannon sampling theorem, or simply "The Sampling Theorem."
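A quick numerical illustration of what happens below the required rate: a 3 Hz sine sampled at only 4 Hz produces samples identical to those of a 1 Hz alias (the frequencies here are my own illustrative choices):

```python
import math

f_signal = 3.0    # Hz (illustrative)
f_sample = 4.0    # Hz -- less than the required 2 * f_signal = 6 Hz
f_alias = abs(f_signal - f_sample)   # the 1 Hz alias

for i in range(8):
    t = i / f_sample
    original = math.sin(2 * math.pi * f_signal * t)
    alias = math.sin(-2 * math.pi * f_alias * t)
    assert abs(original - alias) < 1e-9   # the samples cannot tell them apart
print("3 Hz sampled at 4 Hz looks exactly like a 1 Hz tone")
```

Because the two signals agree at every sample instant, no amount of post-processing can recover the original frequency; hence Shannon's requirement of sampling above twice the highest frequency.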

The Institute of Radio Engineers presented its Medal of Honor to Nyquist in 1960 for his "fundamental contributions to a quantitative understanding of thermal noise, data transmission and negative feedback." Nyquist received the National Academy of Engineering Founder's Medal in 1969 for his "many fundamental contributions to engineering."

1. H. Nyquist, "Thermal Agitation of Electric Charge in Conductors", Phys. Rev. vol. 32 (1928), pp. 110ff.
2. H. Nyquist, "Certain factors affecting telegraph speed," Bell System Technical Journal, vol. 3 (1924), pp. 324-346.
3. J. Johnson, "Thermal Agitation of Electricity in Conductors", Phys. Rev. 32 (1928), pp. 97ff.

March 07, 2007

Spring Forward, Fall Back

Daylight Saving Time (DST) begins in the US this Sunday, March 11, 2007. This is three weeks earlier than in the past. This year the daylight saving period has been extended, so it begins on the second Sunday in March, and it won't end until the first Sunday in November. The official changeover is at 2:00 AM, but most of us will set our clocks ahead one hour at bedtime on Saturday. Of course, there will always be a few clocks we'll forget. I usually discover the wrong time on my automobile clock on Monday morning. Since the mid-1990s, the clocks in most videocassette recorders (VCRs) have been set automatically through time signals supplied by the Public Broadcasting Service (PBS) using Standard EIA-608-B of the Electronic Industries Alliance.

Why Daylight Saving Time? The primary rationale is to allow more recreational time after working hours. A secondary motivation is energy savings. U.S. Department of Transportation (DOT) studies indicate that DST reduces electrical usage by about one percent. This may seem like a small quantity, but the energy equivalent is about a hundred thousand barrels of oil per day. Possibly more significant, there is also a one percent decrease in traffic accidents and traffic fatalities. There are more accidents in the darker mornings, but these are more than offset by a reduction in the evenings. A ten percent decrease in violent crime was noted in Washington, D.C. after imposition of DST. Of course, parents still decry the fact that their children need to travel to school in darkness in the mornings.

A recent article in Nature [1] lists some important dates in DST history.

• 1784 - Benjamin Franklin proposes shifting clocks to make better use of daylight in a letter to the Journal of Paris [2]. True to his scientific nature, Franklin calculates what the monetary savings would be were his proposal enacted.

• 1907 - William Willett publishes the pamphlet, "The Waste of Daylight," in an effort to enact DST legislation in the UK.

• 1916 - DST is adopted by Germany during World War I, followed by Britain three weeks later.

• 1917 - Newfoundland is the first North American region to adopt DST.

• 1918 - The US establishes time zones, and also DST; DST remained in effect for only one year.

• 1948 - Japan adopts DST while under Allied occupation, but stops in 1951. Today, Japan is the only industrialized country not using DST.

• 1966 - The US formally adopts DST, but states and individual counties are allowed to opt-out. Our Honeywell compatriots in Arizona do not follow DST.

1. Michael Hopkin, "Saving Time," Nature vol. 445, no. 7126, pp. 344f. (25 January 2007).
2. Benjamin Franklin's Essay on Daylight Saving to the Editor of the Journal of Paris, 1784.
3. Daylight Saving Time - general history and anecdotes.
4. Summary of Standard EIA-608-B of the Electronic Industries Alliance.

March 06, 2007

Anti-Reflection Coatings

Reflection is a phenomenon that plagues many optical designs, and that's why anti-reflection coatings are applied to higher-end lenses and other optical components. Reflection is the natural consequence of light passing from air, with a refractive index of 1.0, to an optical medium, such as glass with a refractive index of about 1.5. To use an electrical analogy, there is an impedance mismatch between air and glass, and an impedance mismatch always results in reflection of at least part of an electromagnetic wave. At normal incidence, the reflection coefficient (the fraction of light reflected) is given by

R = [(no - ns)/(no + ns)]^2

where no and ns are, respectively, the refractive indices of air and the other medium. The square term is of special interest, since it shows how bad things can be at high refractive index difference. For the given case of a typical glass, where ns = 1.5, about four percent of incident light is reflected. If the light is passed through a window, where two surfaces are involved, the reflection is doubled. You can see that a complex optical system with many interfaces can have serious reflection problems.

The easiest way to improve an impedance match is with an impedance matching network. This is the approach used in most optics when an anti-reflection coating is applied. If we apply a coating of an intermediate refractive index, we are working against the square term, and we come out winners. The optimum value of refractive index for this layer is the geometric mean of no and ns; namely,

n1 = (no × ns)^1/2

For our glass example (ns = 1.5), the optimum index for the anti-reflective coating is 1.225, and the reflectivity is reduced by half. A multilayer coating will give better results, but the catch is that there are few solids with a refractive index near 1.0, so you really can't make a stack of as many layers as you would want.
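These reflectance figures are easy to verify numerically. The sketch below ignores thin-film interference (a quarter-wave layer of the optimum index actually does even better) and simply sums the reflectances of the two interfaces:

```python
import math

def reflectance(n1, n2):
    """Normal-incidence reflection coefficient at an n1 -> n2 interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_glass = 1.0, 1.5
print(reflectance(n_air, n_glass))     # 0.04: 4% per bare glass surface

# One layer at the geometric-mean index, summing the two interfaces:
n_coat = math.sqrt(n_air * n_glass)    # ~1.225
r_total = reflectance(n_air, n_coat) + reflectance(n_coat, n_glass)
print(r_total)                         # ~0.02, half the uncoated value
```

Splitting one large index step into two smaller ones wins because of the square term: two interfaces each reflecting about 1% beat one interface reflecting 4%.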

Researchers at Rensselaer Polytechnic Institute have now created a material with a refractive index of 1.05 [1]. They grow graded-index films of TiO2 and SiO2 nanorods by oblique-angle deposition on aluminum nitride; the nanorods form at an angle of precisely 45 degrees. The particular orientation of the nanorods, along with the fact that the nanorod coating is filled with voids, leads to the low refractive index. How important is this research? Their list of funding sources tells the tale: the National Science Foundation, the U.S. Department of Energy, the U.S. Army Research Office, the New York State Office of Science, Technology and Academic Research (NYSTAR), Sandia National Laboratories, and the Samsung Advanced Institute of Technology.

This low index material is useful not just for anti-reflection coatings; it can also be used to make dielectric mirrors. These are high-reflectance mirrors (R > 0.9999) formed by interleaving low and high index materials. Dielectric mirrors can be designed to reflect light in just a certain band of wavelengths.

1. J.-Q. Xi, Martin F. Schubert, Jong Kyu Kim, E. Fred Schubert, Minfeng Chen, Shawn-Yu Lin, W. Liu, and J. A. Smart, "Optical thin-film materials with low refractive index for broadband elimination of Fresnel reflection," Nature Photonics vol. 1 (2007), pp. 176-179.
2. Rensselaer Researchers Create World's First Ideal Anti-Reflection Coating (RPI Press Release, March 1, 2007).
3. Jeff Hecht, "Nanorod coating makes least reflective material ever." (New Scientist)

March 05, 2007

Hi-Tech Dust Rag

Many years ago, there were people in my research building who used beryllium oxide to produce laser crystals, such as Alexandrite [1] and Lanthanum Beryllate [2]. Not much beryllium oxide was used in toto, and it was handled safely. Beryllium metal, however, is associated with chronic health problems after prolonged exposure, and it's subject to severe safety regulations. Since beryllium is both lightweight and stiff, it's used extensively in the aerospace industry, and also in the fabrication of nuclear weapons.

Ron Simandl, a research chemist at Y-12, a US nuclear weapons plant, has invented a dust rag to protect himself and his colleagues from beryllium contamination [3]. He claims his dust rag, officially called the "Negligible-Residue Non-tacky Tack Cloth," will remove beryllium particles twenty times smaller than the eye can see (about 25 nanometers). His invention is not the cloth itself, but a treatment for dusting cloths. Since it has commercial applications, there's a pending patent. The treated rags collect not just beryllium, but other metals, ceramics, plastics, fibers, and radiological contaminants.

What's the secret? Simandl isn't saying much more than the cloths are treated with an organic solvent that he and partner Scott Hollenbeck developed. The functional coating is dry to the touch, and it doesn't feel tacky. Simandl claims that it is tacky on the microscopic level, but he doesn't understand the exact mechanism. Says Simandl, "The physics of tackiness is very complex... There is a good, but not necessarily obvious reason why they work."

Of course, this invention has many consumer applications, and Simandl tested his dust rags on some household tasks. He tried cleaning the alloy wheels on his automobile, and these were cleaned "bright and showroom-shiny." They worked also on his titanium golf clubs.

I checked the US Patent and Trademark Office web site, and Simandl's patent application is not online. I'd like to read it when it's available. It seems that once his coating material is known, a little science can discover similar useful materials.

1. J. C. Walling, R. C. Morris, E. W. Odell, O. G. Peterson, and H. P. Jenssen, "Tunable alexandrite lasers," IEEE Journal of Quantum Electronics, vol. QE-16 (Dec. 1980), pp. 1302-1315.
2. H. P. Jenssen, R. F. Begley, R. Webb, and R. C. Morris, "Spectroscopic properties and laser performance of Nd3 + in lanthanum beryllate," Journal of Applied Physics, vol. 47, Issue 4 (April, 1976), pp. 1496-1500.
3. Duncan Mansfield,"Nuclear lab develops powerful dust rag." (Associated Press)

March 02, 2007

Peak Power

One problem with renewable electrical power is that it's not always there when you need it. There's plenty of wind power on windy days, and plenty of solar power on sunny days, but where do you get your power on calm, cloudy days? That's why electrical power storage is important, and there have been many schemes to store such power on a utility grid scale. There are more than a hundred large sodium-sulfur battery systems in Japan with capacities up to 70 megawatt-hours [1]. Although there are three similar systems in the US, it is generally agreed that chemical storage batteries are not an economical solution.

One high-tech idea for electrical energy storage is the use of superconducting magnets to store energy in magnetic fields. Now, a very low-tech approach may make a significant contribution to this energy storage problem [2]. Sietze van der Sluis of the Netherlands Organization for Applied Scientific Research (TNO) in Delft is project leader of "Night Wind," a project to "store" electrical energy in refrigerated warehouses. Says van der Sluis, "The 'batteries' are already there, at no extra cost."

Van der Sluis calculated that the equivalent heat capacity of all large cold storage warehouses in Europe is nearly 10,000 megawatt-hours per degree C. Letting the temperature of these warehouses drop by a degree during the night, and then letting it rise by a degree during the day, would allow storage of 50,000 megawatt-hours of energy. All this is done without thawing the food, and the capacity of this storage scheme matches the projected 2010 contribution of wind power to electrical supply in the European Union. Of course, one good experiment is worth a hundred theoretical papers, so van der Sluis intends to build a wind turbine in Bergen op Zoom, in southwestern Netherlands, next to the country's largest refrigerated warehouse.

1. Erik Spek, "Battery of Possibilities," Letter to New Scientist (3 February 2007), p. 21.
2. Declan Butler, "Fridges could save power for a rainy day." (Nature Online)

March 01, 2007

Patently Funny

The US patent system has been the subject of increasing criticism for many years. The area attracting the most criticism has been software patents. Opponents of software patents cite numerous examples of patent applications filed on computer algorithms that have been in common use for many years, and of applications filed on programming ideas most programmers consider too trivial to patent. In patent parlance, the problem is essentially what the "non-obviousness" requirement for being awarded a software patent should be. Inventors, of course, believe that their ideas, whether software or not, should be protected. Paul Graham, who became a millionaire through selling his internet storefront software to Yahoo!, presented a good overview of this problem from both sides of the fence in a March, 2006, talk at Google [1]. Interestingly, his talk was entitled "Are Software Patents Evil?" Graham's conclusion is that patent examiners are overworked, and mistakes happen.

Another area of contention is "Business Method" patents. To cite the legal definition used in Australia, a business method is "a method of operating any aspect of an economic enterprise." A fast food clerk asking, "Do you want fries with that?" constitutes a business method. Two better known business method patents are Amazon's "one-click" patent (U.S. Pat. No. 5,960,411), and its method of collecting and posting customer feedback (U.S. Pat. No. 6,525,747).

In an apparent attempt to spotlight the controversy surrounding business method patents, Timothy Wace Roberts filed an application [2] for a "Business method protecting jokes." The abstract says it all,

"The specification describes a method of protecting jokes by filing patent applications therefor, and gives examples of novel jokes to be thus protected. Specific jokes to be protected by the process of the invention include stories about animals playing ball-games... and the joke that consists in filing a patent application to protect jokes."

His claim number seventeen is especially blunt,

"The joke which comprises the filing of a patent application to protect the method of protecting jokes by filing one or more patent applications thereon."

Roberts' patent application is also self-referential, since it claims itself, a condition he calls "homoproprietary." He also lays claim to similar patent applications filed on April first (a.k.a., April Fools' Day).

1. Paul Graham, "Are Software Patents Evil?"
2. Roberts; Timothy Wace, "Business method protecting jokes," United States Patent Application 20060259306.
3. Patent protection for jokes (New Scientist).