Everything, Nothing ...

Thursday, February 28, 2008

How to Make Crucian Carp Soup

Winter is the season for crucian carp. As the folk saying "autumn carp, winter crucian" has it, the fish at this time of year is not only tender and delicious but also highly nutritious: every 100 g of flesh contains 13 g of protein and 1.1 g of fat, along with plentiful calcium, phosphorus, iron, selenium, and zinc, plus a range of vitamins. The Ming dynasty physician Li Shizhen praised the crucian carp: "In the winter months the flesh is thick and the roe plentiful, and the flavor is especially fine." A few crucian carp dishes in midwinter are a treat all their own.
  Anyone can make crucian carp soup, but making it well is not so simple. Done right, the soup brims with umami and the broth is milky white. There are several tricks:
  First, the ingredients. The fish must not be too small: around 150 g is about right, since smaller fish give a weaker flavor. It must also be fresh, ideally killed within the last four hours. Gut and wash the fish, and be sure to scrape away the black membrane lining the belly cavity; it is bitter.
  Second, "drawing out the umami" while cooking. Rub the fish with a little dry starch. This serves two purposes: it keeps the fish, and especially the skin, intact during frying (a fish with torn skin and missing flesh is a sorry sight), and it coats the body so it heats evenly, frying up golden all over rather than scorched in some places and raw in others. Remember to rub some starch into the score cuts as well. The steps:
  1. Fry the fish. Start with a dry wok and rub it all over with a smashed knob of ginger; the ginger juice helps keep the fish skin from sticking to the pan. (Don't throw the ginger away; we will need it again!) Add about 50 g of vegetable or peanut oil (a little more does no harm), then a small spoonful of lard. Animal fat is anathema to the figure-conscious; if even 3-5 g is too much for you, substitute butter, as the difference in flavor is slight. Either one enriches the broth and deepens the umami, so be sure to add one of the two.
  2. Once both sides are fried, pour boiling-hot water from a thermos straight into the wok, about three-quarters of a soup bowl, or in any case a little more than you plan to serve. The key is to use hot water and to add all of it at once. Do not simmer the soup for a while and then top it up when the water runs low; that would badly compromise the soup. Nor should the heat be low: keep the water at a rolling boil the whole time so the broth turns white. Cold water makes the proteins contract abruptly, toughens the flesh, and hinders the release of hydrolyzed protein, so the milky broth never forms and the taste suffers greatly. Remember this!
  3. With the hot water in, return the ginger from before, add scallion sections and about 3-5 g of sugar, and on no account forget the sugar. Do not salt yet. Cover the wok, bring the soup to a boil over high heat, and only then add the salt; this too is a key umami step. Now taste and adjust the seasoning, keeping it slightly on the light side, since the flavor concentrates as the water boils down.
  4. Turn the heat down, cover, and simmer gently for 10 to 15 minutes, turning the fish once partway through. The soup is done. Remove the scallion and ginger if you prefer. See how the broth is as thick as milk? Ladle it into a tureen and serve.
  There are four umami keys here; did you find them all? Right: 1. the lard or butter; 2. making the broth with hot water; 3. the sugar and red dates; 4. the timing of the salt. Note: MSG and chicken bouillon are not welcome here.
  Third, "savoring" at the table. Many people hold that you should eat first and drink soup afterward. That may do for light, clear soups, but not for this rich crucian carp soup. To really appreciate its freshness you must drink it first, while piping hot; that is when it is at its best. Ladling it over rice also makes an excellent choice. Because the soup contains animal fat, and fish develops a faint fishy note that masks the umami as it cools, it should be enjoyed hot as soon as it reaches the table. The simmering time is just right, so the flesh is tender and both soup and fish go down beautifully. Drinking the soup first also helps you eat less overall and keep your figure, and the rich, nourishing broth is kind to the stomach and the body. Trust me, you will not regret that little bit of lard.

Materials research and the ‘energy crisis’

http://dx.doi.org/10.1016/S1369-7021(08)70035-X


Volume 11, Issue 3, March 2008, Page 64

Opinion


David Cahen
Weizmann Institute of Science, Israel

Available online 15 February 2008.



Forget about ‘the’ solution. Instead, we need to work toward a strong, sustainable mosaic of many solutions.






I will try to outline my credo for why and how, from the basic science point of view, we should tackle energy-related materials research issues.

In the title, I put energy crisis in inverted commas. Why? Because for at least the next few hundred years, we do not lack a relatively cheap source of energy – we have coal (and coal liquefaction and gasification are known processes).

Unfortunately, because most coal is rather dirty, this leads to assured environmental and highly likely climate problems. Even those who doubt the latter cannot ignore the former. Visit some of the world's emerging industrial areas and you can quote from the 1960s Tom Lehrer song Pollution, “Don’t drink the water and don’t breathe the air,” and add in land pollution, although it does not rhyme. Here we find the true driving forces for weaning us off fossil fuels and for developing alternative, sustainable energy resources (ASER).

And there is another strong materials reason: it is an utter waste to just burn oil, as it is such a valuable natural resource. The long-term interest of oil-producing countries is to use oil as a natural materials resource. Indeed, legendary Saudi oil minister Yamani's quote, “the stone age did not end because of a lack of stones,” says it all.

Before giving my view on the roles for basic science in developing ASER, I should stress the importance of energy conservation to reduce the amount of fossil fuel needed to get the same amount and types of work done. There are materials-related issues such as improving insulation, reducing friction between moving parts, improving materials for natural lighting, and recycling materials. To these we can add improving waste heat use, reducing waste energy, and increasing the efficiency of current power-generation options (including renewable ones such as solar water heaters) with less pollution. Admittedly, these are not the glamor topics that may please your favorite journal's editor, as much as the following ones, but they are critical in the short term.




© The New Yorker Collection 1998 Frank Cotham from cartoonbank.com. All Rights Reserved.


Past experience teaches us that results of breakthroughs in basic science today will only start to be felt after some 15–20 years. Well, one may say, that is fine. But here is the catch. Basic scientific research in ASER decreased so much after the drop in oil prices in the early-1980s that we have a very narrow base of relevant fundamental science on which to build new technologies. That is why we need to kick start long-term support for the sorely needed basic research now.

Now let me stick my neck out: if I were a research program manager, I’d support:

1. Exploratory, blue-sky basic research per se as the best proven way to stumble on new ideas† ;

2. Optics – cheap ways to use larger parts of the solar spectrum in quantum conversion systems such as photo-(bio)chemical and photovoltaic devices;

3. Heterogeneous catalysis for reduction reactions‡ to bring us closer to the holy grail of efficient cheap artificial photosynthesis and find replacements for noble metal catalysts; and

4. Plant science.

Clearly, especially (2) and (3) present major challenges for materials research.

I am aware that my own field of photovoltaic materials is not mentioned specifically, although it definitely is in (2). Instead, I have tried to give a more holistic picture for a purpose. ASER suffered and suffers still from too many claims of the solution and we, the researchers, are the culprits. Even if such claims bring publicity, in the end they harm the whole area. It is my opinion that there is not one solution. Instead, we need to work toward a strong, sustainable mosaic of many solutions, which, as a whole, will provide the solution.


† We should, though, make sure that researchers are aware of the issues, i.e. Louis Pasteur's famous dictum, “Chance favors the prepared mind”, applies! [cf. Nat. Mater. (2008) 7, 93.]
‡ Basic catalysis research today is directed mostly towards oxidation – the oil industry's interest, as their basic starting materials are reduced carbon. Furthermore, it focuses on homogeneous catalysis, which provides only a small fraction of today's industrial catalysts.

The 21st century engineering graduate

http://dx.doi.org/10.1016/S1369-7021(08)70002-6


Volume 11, Issue 3, March 2008, Page 6

Comment


Ruth Graham, EnVision Project Director
Imperial College London, UK

Available online 15 February 2008.



There is a growing appreciation in the academic community of the need for change in engineering undergraduate education.






The move to change the way we educate engineers comes in response to a number of factors: growing demands from industry for graduates with a broader skill base, increasing international competition in engineering education, and the motivations and experiences of prospective students.

The EnVision project at Imperial College London, UK, aims to transform undergraduate engineering education – who we teach, what we teach, and how we teach. This is clearly an ambitious goal – it is a huge challenge to transform undergraduate education and related support activities across nine engineering departments and 3000 undergraduates.

The initial phase of the project, following its establishment in March 2005, required the answering of a simple question: “What, if anything, do we need to change?” Surveys were conducted of over 2500 of our stakeholders – students, alumni, academics, industry, and professional bodies – alongside a study of the educational developments at the best institutions for engineering education across the world. These studies revealed a remarkable degree of consensus over what changes were required, particularly in the description of the ‘ideal’ engineering graduate, as well as a number of more unexpected findings. I was particularly interested in the extent to which students, graduates, and industry employers saw sustainability – the ability to take a leading role in designing solutions to local, national, and global challenges affecting society – as an important theme in the skill set of the engineering graduate of the future.

After drawing together and assessing this information, the themes of EnVision were identified:

1. Improve and sustain our ability to recruit the most able students;

2. Improve the motivation and engineering aspirations of undergraduates;

3. Ensure that our graduates possess the skills, knowledge, and attitudes necessary to become international leaders in engineering in both industry and academia;

4. Enhance the faculty's educational provision, through the transformation and development of both curricular and noncurricular activities;

5. Significantly improve the physical learning environment and facilities in the faculty; and

6. Improve the environment for support, reward, and celebration of excellent teaching.

We have since been working with students, academics, and other key groups to determine exactly how these changes should be designed and implemented. With the project now in its implementation phase, a large number of activities are already underway.

One critical element of the project is to effect a cultural change by which teaching excellence is promoted, rewarded, and celebrated. Last June, Imperial's Faculty of Engineering presented the inaugural Awards for Teaching Excellence in Engineering Education. The awards recognize and reward individuals and small teams renowned for the excellence of their teaching. Three awards were presented, each worth €13 500, and the awards have now been established as an annual event.

Another strong theme in EnVision is the establishment and support of undergraduate projects that broaden the personal and professional leadership and communications skills of students. The projects encourage them to apply their theoretical knowledge to complex real-life engineering situations. This helps to motivate and inspire them to a career of lifelong learning in engineering. One example is Racing Green, which brings students together from across the engineering faculty to design, build, and race a zero-emission electric hybrid fuel cell racing car. Students from six engineering departments are working together, using cutting-edge technology to develop a race vehicle ready for its first time trial this year.

A major driving force of EnVision is to shift the focus of engineering education towards ‘learning by doing’. This allows students to deepen their theoretical understanding and develop their professional skills by applying their engineering knowledge in real-world situations. To reflect this shift in the way engineering is taught at Imperial, we are also developing new teaching and learning spaces, which symbolize and inspire the imagination and creativity evident in the very best achievements of engineering technology and practice.

External communications and network building are also key elements of the project. In September 2007, we organized a high-level strategy forum to explore the issue of how to equip the engineering graduate of the future to take a leading role in designing solutions to local, national, and global challenges affecting society and the world around us. This event responded directly to calls from many of our stakeholders for an increased acknowledgment of sustainability in undergraduate engineering education. It brought together decision makers from the academic community, government, professional bodies, nongovernmental organizations (NGOs), industry, and student groups and was a resounding success.

Although the EnVision project is still in an early stage of implementation, it already seems to be making a significant impact. I can see that the positive changes we are starting to make at Imperial through this program are producing some significant waves, and I believe we have a real opportunity to set a new benchmark for engineering education worldwide.

Duplicity in publishing papers

http://dx.doi.org/10.1016/S1369-7021(08)70001-4


Volume 11, Issue 3, March 2008, Page 1

Editorial


Jonathan Wood, Editor, Materials Today

Available online 15 February 2008.



Tools that compare papers for similarities could reduce plagiarism and duplicate publications.






How big a problem is plagiarism? We might suspect that a fair amount of dubious publishing practices go on, such as publishing the same results more than once, submitting a paper to more than one journal, or reproducing others' work without acknowledgement. It seems to have become more prevalent as pressures to publish increase.

But how do we begin to identify incidences of plagiarism? It is surely not enough to go by anecdotal reports, or rely on researchers to highlight cases as they come across them by chance in the literature. Well, it's probably not that difficult. Computer-based tools that search documents for similarities have been around for a while and are widely used to identify cheating in school and college exams. Now Harold Garner and Mounir Errami at the University of Texas Southwestern Medical Center have developed a computer code that checks multiple documents for duplication of key words and compares word order and proximity. Applying their eTBLAST program to >7 million abstracts in Medline, 70 000 papers came back as being highly similar [Nature (2008) 451, 397].
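The kind of screening described above can be sketched in a few lines of Python. This is only a toy illustration of comparing texts for shared vocabulary and word order with invented thresholds, assuming simple whitespace tokenization; it is not the eTBLAST algorithm itself.

```python
def jaccard(a: str, b: str) -> float:
    """Fraction of distinct words the two texts share (order ignored)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def ordered_overlap(a: str, b: str, n: int = 3) -> float:
    """Fraction of word n-grams shared; sensitive to word order and proximity."""
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

def highly_similar(a: str, b: str,
                   word_t: float = 0.6, order_t: float = 0.4) -> bool:
    """Flag a pair of abstracts for human review; thresholds are illustrative."""
    return jaccard(a, b) >= word_t and ordered_overlap(a, b) >= order_t
```

As the editorial goes on to stress, a screen like this can only flag candidates; deciding whether a flagged pair is actually unethical remains a human judgment call.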

Now there are a number of circumstances where significant duplication is to be expected and is wholly ethical. These include updates to clinical trials, conference papers, and corrections to papers. But Garner and Errami are determined in their pursuit of unethical practices. They have placed the 70 000 potential duplicate papers on a publicly available database, Déjà vu (http://spore.swmed.edu/dejavu), and have begun to check them manually (quite a task!). Already, one suspected case of plagiarism has reportedly resulted in an investigation by a journal. “We can identify near-duplicate publications using our search engine,” says Garner. “But neither the computer nor we can make judgment calls as to whether an article is plagiarized or otherwise unethical. That task must be left to human reviewers, such as university ethics committees and journal editors.”

That does point to one problem: whose responsibility is it to pursue cases of unethical practice, the journals or the universities? The ease of deploying such computational tools probably points toward the journals taking the lead in identifying suspect papers (and many publishers are jointly investigating the best way to proceed), but the subsequent steps are more open to debate.

I am convinced that it would be a very worthwhile step. Caution needs to be applied with suspect papers, and common standards about how much new work is needed for a new paper must be agreed upon, but clear-cut cases should be discovered in this way. It does lead me to wonder, though, what else is possible. Can figures be checked for manipulation or duplication? This is where scientific fraud and misrepresentation of data most often happen.

The top ten advances in materials science

http://www.materialstoday.com/archive/2008/11-01/top10.html


What are the defining discoveries, moments of inspiration, or shifts in understanding that have shaped the dynamic field of materials science we know today? Here’s what we think are the most significant.

December 19, 2007

Jonathan Wood
Editor, Materials Today
E-mail: j.wood@elsevier.com

Read this article in pdf format

The ending of one year and the beginning of the next is a strange time. It is very human to mark the passing of time, remembering what has been done before looking forward to what’s to come. As the New Year arrives, whether you prefer peaceful reflection or joyous celebration, awards or resolutions, one thing is clear from any newspaper or magazine. It is, above all, a time to draw up lists. Who are we to disagree?

We’ve assembled a list of the top ten advances in materials science over the last 50 years. We thought long and hard. We sought the advice of our editorial advisory panel and asked leaders in the field to add their own contributions. We hope the results are interesting and thought-provoking.

In making the final selection, we have tried to focus on the advances that have either changed our lives or are in the process of changing them. This is arguable, of course. Should an advance alter all our daily lives, or does fundamentally changing the research arena count? What about discoveries that can be clearly attributed to a certain date and investigator, or those developments that have come about incrementally through the efforts of many? Where does materials science stop and electronics, physics, or chemistry begin? And how do you assess the value of things like plastic bags? Undeniably they are a boon for carrying shopping but now also an item of scorn for energy and waste reasons.

Instead of ruling any of these out, we’ve tried to come up with a balanced selection. In doing so, we hope to start some debate about the discoveries that most mark out today’s materials science. Let us know what we’ve missed. If you’re incredulous that organic electronics or high-temperature superconductors aren’t in the top ten, tell us why. Should Kevlar, Post-it notes, float glass, or F1 racing tires be in the list? What will define the next 50 years of materials science?

If you believe materials scientists are unsung heroes, that our work goes unnoticed and unheralded, here is your ammunition. With our time limit of 50 years, the list is of immediate relevance. It is about how materials science is affecting our world today, now.

1. International Technology Roadmap for Semiconductors


Semiconductor research is guided by the ITRS. (Courtesy of SEMATECH.)
OK, so it’s not a research discovery, merely a way of organizing research priorities and planning R&D. But the International Technology Roadmap for Semiconductors (ITRS) is a remarkable achievement (see The history of the ITRS). It sets out goals for innovation, technology needs, and measures for progress that all can sign up to in the fiercely competitive microelectronics industry.

A mixture of science, technology, and economics, it’s hard to see how the ITRS could do better in driving forward advances in this area, whether it’s in materials, characterization, fabrication, or device design. And it is an appropriate first choice in this list. Not only is electronics absolutely critical to our modern world, progress in semiconductor processing and advances in materials science have gone hand-in-hand for the last 50 years.

Let’s just hope the International Panel on Climate Change enjoys similar success in driving innovation and reaching agreed goals.

2. Scanning probe microscopes

The invention of the scanning tunneling microscope (STM) by Heinrich Rohrer and Gerd Binnig at IBM’s Zurich Research Laboratory was deservedly awarded the Nobel Prize for Physics in 1986.


Rohrer (left) and Binnig (right) with a first-generation scanning tunneling microscope. (Courtesy of IBM Zurich Research Laboratory.)
Not only is this a new microscopy technique – remarkable enough in itself – but it provides a way to probe the local properties of a sample directly with nanometer resolution. Quickly followed by the atomic force microscope (AFM), this new access to the nanoscale world (see Making sense of the nanoworld), arguably brought about the current ubiquity of nanotechnology. The invention immeasurably increased our abilities at this scale.

3. Giant magnetoresistive effect

The 2007 Nobel Prize for Physics went jointly to Albert Fert of Université Paris-Sud, France, and Peter Grünberg of Forschungszentrum Jülich, Germany, for independently discovering the giant magnetoresistance (GMR) effect in 1988. So it is no surprise to see this advance on our list.

GMR describes the large change in electrical resistance seen in stacked layers of magnetic and nonmagnetic materials when an external magnetic field is altered. Thanks largely to the subsequent work of Stuart Parkin and coworkers at IBM Research, the phenomenon has been put to great effect in the read heads in hard disk drives. These devices are able to read out the information stored magnetically on a hard disk through changes in electrical current.

The high sensitivity of GMR read heads to tiny magnetic fields means that the magnetic bits on the hard disk can be greatly reduced in size. The phenomenal expansion in our ability to store data that we continue to witness today can be traced back to this discovery.

4. Semiconductor lasers and LEDs

The development of semiconductor lasers and light-emitting diodes (LEDs) in 1962 is a great materials science story (see The III-V laser and LED after 45 years). They are now the basis of telecommunications, CD and DVD players, laser printers, barcode readers, you name it. The advent of solid-state lighting is also likely to make a significant contribution to reducing our energy usage.

5. National Nanotechnology Initiative

Bill Clinton gets some of the credit for the fifth materials science development on our list. He was the US president who announced the establishment of the National Nanotechnology Initiative (NNI) in 2000, a US federal, multi-agency research program in nanoscale science and technology.

The NNI has had an immense impact. It cemented the importance and promise of a nascent, emerging field, establishing it immediately as the most exciting area in the whole of the physical sciences. Nanotechnology simultaneously gained an identity, a vision, and a remarkable level of funding through the initiative. It also established a method of funding interdisciplinary science in such a way that the rest of the world would have to try to match.

Mihail C. Roco of the National Science Foundation was one of those who was involved in the initial NNI vision setting and national organizational efforts. “During 1997 to 1999, I worked with an initially small group including Stan Williams, Paul Alivisatos, James Murday, Dick Siegel, and Evelyn Hu,” recalls Roco. “We envisioned a ‘new industrial revolution’ powered by systematic control of matter at the nanoscale. With this vision, we built a national coalition involving academia, industry, and a group of agencies that became the nucleus of the NNI, launched in 2000.”

The NNI now involves 26 independent agencies and has an estimated budget of ~$1.5 billion in 2008. It has been the largest single investor in nanotechnology research in the world, providing over $7 billion in the last seven years. Now 65 countries have national research focus projects on nanotechnology, while industry nanotechnology R&D has exceeded that of governments worldwide. The global nano-related R&D budget was in excess of $12 billion in 2007.

On behalf of the interagency group, Roco proposed the NNI on March 11, 1999 at the White House Office of Science and Technology Policy (OSTP). The fear of many was that there was little chance of nanotechnology becoming a national priority program. Surely it would be perceived as being of interest just to a small group of researchers? Instead, by defining nanotechnology as a broad platform for scientific advancement, education, medicine, and the economy, the NNI was approved with a budget of $489 million in 2001. “The NNI was prepared with the same rigor as a science project,” says Roco.

6. Carbon fiber reinforced plastics


Carbon fiber-reinforced plastics were at the heart of this bike built by Lotus Engineering for the 1992 Barcelona Olympics. It helped Chris Boardman win gold. (Courtesy of Lotus.)
The last 50 years have seen advanced composites take off – quite literally, in that many applications of these light but strong materials have been in aviation and aerospace. But modern composite materials have touched just about all industries, including transport, packaging, civil engineering, and sport. They can be found in Formula 1 cars, armor, and wind turbine rotor blades.

Leading the charge are carbon fiber reinforced plastics or, more properly, continuous carbon fiber organic-matrix composites. These materials bond extremely stiff, high-strength carbon fibers into a polymer matrix to give a combined material that is also exceptionally tough and light in weight.

The early 1960s saw the development of carbon fibers produced from rayon, polyacrylonitrile, and pitch-based precursors. The long, oriented aromatic molecular chains give the fibers exceptional strength and stiffness. This was a real gain over the amorphous glass fibers used previously in composite materials.

The development of carbon fibers, together with advances in design, modeling, and manufacturing, has given rise to composite materials with controlled, specific properties. “Rather than an engineer using a constant set of material characteristics, organic-matrix composites and the associated manufacturing methodology now enables the engineer to design the material for a specific application,” says Richard A. Vaia of the Air Force Research Laboratory. “The manufacturing science has opened up new frontiers, effectively moving component design down to materials design.” The spectacular gain in performance has seen the increasing use of these materials despite the cost and increased difficulty in design, shaping, and recycling, such that the new Boeing 787 uses composites extensively in its wings and fuselage.

7. Materials for Li ion batteries

It is hard to remember how we coped before laptops and cellular phones came along. This revolution would not have been possible without a transition from rechargeable batteries using aqueous electrolytes, where H+ is the working ion, to the much higher energy densities of Li ion batteries.

Li ion batteries required the development of novel electrode materials that satisfy a number of considerations. In particular, the cathode needs a lightweight framework structure with free volume in between to allow a large amount of Li ions to be inserted and extracted reversibly with high mobility.

The process of materials design and discovery involved a mixture of clever chemical and electrochemical intuition, rational assessment of the technical requirements, and substantial experimental effort, and was dominated by the work of John B. Goodenough and colleagues at the University of Oxford in the 1980s. They came up with the cathode material LiCoO2, which Sony combined with a carbon anode in 1991 to give us the batteries that make possible the portable devices we know today. Work continues on cathode materials without the toxic Co and with three-dimensional framework structures like LiFePO4 for environmentally benign, high-energy density batteries.

8. Carbon nanotubes


Viewgraph showing a single- or double-walled CNT published in 1976. (Reprinted with permission from Oberlin, A., et al., J. Cryst. Growth (1976) 32, 335. © 1976 Elsevier.)
Although a discovery normally attributed to Sumio Iijima of NEC, Japan in 1991, the observation of nanotubes of carbon had actually been made on previous occasions (see A journey on the nanotube). However, following on from the excitement of the discovery of C60 buckyballs in 1985 – a new form of carbon – Iijima’s observations of new fullerene tubes aroused great interest immediately.


Today, the remarkable, unique, and phenomenally promising properties of these nanoscale carbon structures have placed them right among the hottest topics of materials science. So why are they only at number eight in this list? Well, there still remains much to sort out in their synthesis, purification, large-scale production, and assembly into devices. And there’s also the very frustrating inability to manufacture uniform samples of nanotubes with the same properties.

9. Soft lithography

The ability to fabricate functional structures and working devices in different materials is central to the production of microelectronic devices, data-storage systems, and many other products. This process is almost exclusively carried out by highly specialized, complex, and very expensive photolithography equipment confined to the controlled environments of cleanrooms. How valuable, then, is the introduction of an alternative?

Soft lithography makes use of the simple, ancient concept of using a stamp to produce patterns again and again. It can be used on many different substrates, be they flat, curved, or flexible. What’s more, soft lithography is cheap, offers nanoscale resolution, and can be applied to new areas in biotechnology and medicine.

The initial technique of microcontact printing (µCP) was developed in 1993 at the lab of George Whitesides at Harvard University. “Microcontact printing has revolutionized many aspects of materials research,” says Byron Gates of Simon Fraser University, Canada. “Molecules are transferred to a substrate using an elastomeric stamp. This poly(dimethylsiloxane) or PDMS stamp conforms to the substrate, unlike hard masks used in previous lithography techniques.” In this way, molecules can be printed over large areas in well-defined patterns with features just 30 nm in size. As well as the transfer of small organic molecules, µCP has been adapted to print solid materials directly, extending its capabilities into nanofabrication. Since 1993, µCP has expanded into a suite of printing, molding, and embossing methods known as soft lithography. All of them use an elastomeric stamp to reproduce a pattern from a master template over and over again.

“All these techniques share one thing: the use of organic materials and polymers – ‘soft matter’ in the language of physicists,” says Younan Xia of Washington University in St. Louis. “Soft lithography offers an attractive route to microscale structures and systems needed for applications in biotechnology, and most of them exceed the traditional scope defined by classic photolithography.”

10. Metamaterials


The metamaterial structure of an invisibility cloak that hides objects from microwave radiation. (Credit: David Schurig, Duke University.)
The beginning of the new millennium brought great excitement when it was conclusively demonstrated that a material with a negative refractive index could exist. Light, or at least microwaves, would bend the ‘wrong way’ on entering this material, according to a standard understanding of Snell’s law of refraction. This ended a long-standing argument over Veselago’s prediction in the 1960s that materials simultaneously having a negative permeability and a negative permittivity would have a negative refractive index. At the same time, it opened up a perplexing new optical world full of counterintuitive results that can be explained using 19th century classical electromagnetism.
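The ‘wrong way’ bending follows directly from Snell’s law once a negative index is admitted, as a minimal numerical check shows. The indices below are illustrative, with n = -1 standing in for the demonstrated metamaterial:

```python
import math

def refraction_angle_deg(n1: float, n2: float, incidence_deg: float) -> float:
    """Refraction angle from Snell's law, n1*sin(t1) = n2*sin(t2), in degrees."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(s))

# Air (n = 1.0) into glass (n = 1.5): the ray bends toward the normal,
# emerging at roughly 19.5 degrees for 30 degrees incidence.
print(refraction_angle_deg(1.0, 1.5, 30.0))

# Air into an n = -1 medium: the angle comes out negative (~ -30 degrees),
# so the ray crosses to the same side of the normal it arrived on.
print(refraction_angle_deg(1.0, -1.0, 30.0))
```

The negative sign of the refraction angle is the whole story: the formula is unchanged, but a negative n2 sends the refracted ray to the ‘wrong’ side of the normal.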

But the surprising optical properties arise not so much from the material's composition as from its structure. The first example was a composite of metal wires and split rings assembled on a lattice of printed circuit boards: a metamaterial, that is, an artificial structure of repeated micro-sized elements designed for specific properties.

“Metamaterials derive their properties as much from their internal structure as from their chemical composition,” explains John Pendry of Imperial College London, UK. “Adding structure to chemistry as an ingredient greatly increases the range of properties that we can access. There is a new realization that metamaterials can give access to properties not found in nature.”

Crucially, if the structure of the material is much smaller than the light's wavelength, then an overall permittivity and permeability of the material can still be used with Maxwell's equations to describe the electric and magnetic response of the material. Thin wire structures can generate a negative electrical response at gigahertz frequencies, while split-ring structures generate a negative magnetic response. These structures were combined for the first time in 2000 by David Smith, Willie Padilla, and Sheldon Schultz at the University of California, San Diego to make a negatively refracting material. “Now many people are going through a process of feverish invention as new possibilities are explored, pushing the concept up in frequency towards the visible and also downwards, even to create novel dc responses,” says Pendry.

“Theorists too have been inspired,” adds Pendry, who pointed out that a negative refractive index could be used to construct a ‘perfect lens’. Such a lens would have a resolution limited not by fundamental physics, but only by the quality of its manufacture. “A new approach to subwavelength imaging now rides on the back of the metamaterial concept,” he says. Several invisibility cloaks, designed to hide objects from electromagnetic radiation, have also been proposed. All of these proposals rely on metamaterials for their realization.

“The first applications [of metamaterials] will be simple improvements of existing products,” Pendry expects. “For example, lightweight lenses for radar waves have been manufactured using metamaterials. Then entirely novel applications will follow, probably developed by the research students of today’s metamaterials researchers.”

The history of the ITRS
The International Technology Roadmap for Semiconductors (ITRS) provides a guideline for research and development on integrated circuit technology needs within a 15-year horizon. Updated annually, the ITRS evolved from a series of workshops and assessments conducted by industry leaders in the late 1980s to ascertain precompetitive critical needs. The first national technology roadmap efforts began in 1992, and in 1993 the first Semiconductor Technology Roadmap effort was sponsored by the Semiconductor Industry Association, supported by the Semiconductor Research Corporation, and edited and produced by SEMATECH.

In 1994, the roadmap was updated by a team of over 400 technologists and renamed the National Technology Roadmap for Semiconductors (NTRS). In 1997, the NTRS began to emphasize the challenges, technology requirements, and potential solutions for each roadmap topic. The NTRS was reviewed for the first time in 1998 by an international team that included technologists from Europe, Japan, Korea, and Taiwan. The first ITRS was produced in 1999, the first ever international industry roadmap of its kind.

The ITRS is based on the consensus of a substantial team: more than 1200 participants from industry, national laboratories, and academia were involved in 2005 and 2006. As the manufacturing of semiconductors becomes more challenging, the ITRS teams are expanding the role of roadmapping into new topics with the potential to guide the industry beyond complementary metal-oxide-semiconductor (CMOS) systems. The 2007 edition is estimated to run to 18 chapters and over 1000 pages.

Linda Wilson, ITRS managing editor, SEMATECH, and Alain Diebold, College of Nanoscale Science and Engineering, University at Albany, State University of New York



Making sense of the nanoworld
The fabrication of the first scanning tunneling microscope (STM) in March 1981 at IBM’s Zurich Research Laboratory made it possible for the first time to produce real-space images of electrically conductive surfaces with subnanometer spatial resolution. The development of the atomic force microscope (AFM) in 1986 at the IBM Almaden Research Center and Stanford University extended these explorations to electrically insulating and biological materials.

These two inventions have opened doors into the nanoscale world, and ultimately to nanotechnology. Looking at individual nano-entities such as single molecules – how they react to external stimuli, how they move and dance on a surface, and how they recognize and talk to each other – is no longer science fiction. Moreover, these nanotools allow the manipulation of individual nano-objects and enable scientists to gain quantitative insight into their physical and chemical properties. Thus they have become crucial in optimizing the performance of nanodevices.

The ultimate impact of these tools will surely cover a huge range of disciplines, including materials science, (opto)electronics, medicine, and catalysis, and they will offer new solutions to key problems such as energy and the environment.

In the end, scanning probe microscopy (SPM) techniques are all about the five senses. Sight is achieved by gently touching surfaces; hearing comes from the acoustic response of the tip, which allows detailed insight into the mechanical properties of surfaces. The same tips, once functionalized with well-defined chemical groups, can identify their counterparts through molecular recognition – and so finally smell and taste the new and thrilling perfume and flavor of the nanoworld.

Paolo Samorì, ISIS-ULP/CNRS, Strasbourg, France and ISOF-CNR, Bologna, Italy



The III-V laser and LED after 45 years
A significant fraction of the Earth’s population has, by now, seen an LED. But few are aware that it is not a conventional light source but rather an electronic one, related to the transistor.

As the first student, and then colleague for 40 years, of John Bardeen (one of the inventors of the transistor), I heard him explain many times that it was not known until the transistor that a current could create a nonequilibrium electron-hole population in a semiconductor. Subsequently, electron-hole recombination could re-establish equilibrium, delivering light.

As we studied recombination for transistor reasons, we were on the path to the laser and LED, especially when we moved to the direct-gap III-V compounds. Studying GaAs for tunnel diodes in 1960–62, I was not happy with its 1.4 eV (infrared) bandgap. I learned how to shift GaAs towards GaP, to GaAs1–xPx and red light wavelengths.
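The shift from GaAs toward GaP can be illustrated with a rough calculation (my own, not from the article): linearly interpolating the direct gap of GaAs1–xPx between commonly quoted endpoint values, ignoring bowing and the direct-indirect crossover near x ≈ 0.45, and converting photon energy to wavelength.

```python
# Commonly quoted direct (Gamma-valley) gaps at 300 K, in eV.
E_GAAS, E_GAP = 1.42, 2.78

def direct_gap_ev(x):
    """Vegard-like linear interpolation of the GaAs(1-x)P(x) direct gap.
    A rough approximation: real alloys show bowing and go indirect
    for x above roughly 0.45."""
    return (1 - x) * E_GAAS + x * E_GAP

def wavelength_nm(e_ev):
    # Photon energy-wavelength relation: lambda(nm) = 1239.84 / E(eV)
    return 1239.84 / e_ev

for x in (0.0, 0.4):
    e = direct_gap_ev(x)
    print(f"x = {x}: Eg ~ {e:.2f} eV, lambda ~ {wavelength_nm(e):.0f} nm")
```

At x = 0 this gives the infrared GaAs emission near 870 nm; by x ≈ 0.4 (the classic red-LED composition) the gap has moved to roughly 630 nm, in the visible red.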

In 1962, a small number of us realized that the GaAs p-n junction might serve as the basis of a laser. But I wanted to work not in the infrared, but with GaAs1–xPx in the visible region where the eye sees. I knew enough about lasers to know I needed a cavity to help my red p-n junctions become lasers.

My astute colleague at General Electric (GE), Bob Hall, was one step ahead of me. He made GaAs diodes with Fabry-Perot resonator edges, with the crystal itself the cavity – very clever! He preferred polishing to make his diode cavities and I preferred cleaving (not so easy).

Then, one early fall day, Hall’s boss called me to tell me that Hall was running a laser, and would I please give up cleaving! I devised at once a simple method to polish my diode Fabry-Perot cavities, and immediately had red III-V alloy lasers and LEDs.

With Hall’s infrared GaAs lasers and incoherent emitters and my visible, red GaAs1–xPx lasers and LEDs, GE announced the availability of these devices for sale late in 1962. The red LED was practical from the beginning, and only got better and cheaper over time.

Now, after 45 years of work by many people, the high-brightness, high-performance LED promises to take over lighting. The scale and variety of what is happening is surprising – almost unbelievable. Since we are talking about an ‘ultimate lamp’, this work won’t stop; it will only grow and, of necessity, become cheaper. This will make the universal use of the LED possible – appearing everywhere in lighting and decorating!

Nick Holonyak, Jr., University of Illinois at Urbana-Champaign



A journey on the nanotube
Sumio Iijima reported the observation of multiwalled carbon nanotubes (CNTs) in 1991 [Nature (1991) 354, 56]. Then in 1993, two independent groups, Iijima and Ichihashi [Nature (1993) 363, 603] and Bethune et al. [Nature (1993) 363, 605], reported the growth of single-walled CNTs in the same issue of Nature.

The impact of these papers on the scientific community has been tremendous, perhaps leading to the birth of nanoscience and nanotechnology.

However, the first direct observation of multiwalled CNTs was recorded in 1952 by Radushkevich and Lukyanovich [Zurn. Fisic. Chim. (1952) 26, 88], while an image of a single- or possibly double-walled CNT was published in 1976 by Oberlin et al. [J. Cryst. Growth (1976) 32, 335].

Aside from the controversy surrounding their discovery, the tremendous mechanical, electrical, and thermal properties of CNTs combined with a low density promise to revolutionize materials science.

Applications are appearing in integrated nanoelectromechanical systems working in the gigahertz frequency band, exquisitely sensitive mechanical sensors, ultrasharp scanning probe microscopy tips, nanosized drug delivery vehicles, and so on. Moreover, using CNTs as fiber reinforcements could lead to innovative new composite materials.

Even though miniaturization tends to be the focus for CNTs, mechanics also pulls in the opposite direction, because the human scale is the meter. CNTs are strong and stiff mainly because they are small and thus nearly defect-free – their best attribute. Controlling and minimizing defects while scaling up CNT structures would thus be a real breakthrough.

For example, a macroscopic cable having the same strength-to-density ratio as a single, defect-free nanoscopic CNT would allow us to build fantastic structures such as a terrestrial space elevator. Here, a cable attached to the planet’s surface could carry payloads into space.

Alternatively, if CNT materials that mimic the hairs on the feet of spiders and geckos could be scaled up, a Spiderman suit for clinging to walls would be within the reach of all of us. There is also plenty of room at the top.

Nicola Pugno, Politecnico di Torino, Italy