3. April 2014   12:39 pm
Dr. K. L. Mittal, Dr. Robert H. Lacombe

Is Nylon Hydrophobic, Hydrophilic or Maybe Both?

In a recent posting on LinkedIn, Scott Sabreen, Owner-President of The Sabreen Group Inc., initiated the following discussion:


Nylons are inherently difficult to bond because they are hydrophobic, chemically inert and possess poor surface wetting. …

Nylons are hygroscopic and will absorb moisture in excess of 3 percent of their mass of water from the atmosphere. Moisture, in and of itself, creates adhesion problems.  …


Hold on. On the face of it, the above remarks would seem to be mutually contradictory. Is nylon hydrophobic or hydrophilic?

The resolution of this apparent paradox comes in recognizing that the hydrophobic behavior of nylon is a surface property and the hydrophilic behavior is a bulk property.

Since nylon is an organic polymer it has a relatively low surface energy as do most polymers.  This is a consequence of the surface chemistry and surface physics of polymers and other organics as discussed in the previous edition of this blog.

However, the amide groups in the nylon chain attract water, and they give rise to the hydrophilic behavior of this material in regard to BULK ABSORPTION of water. A number of other polymers, such as the polyimides, behave in a similar manner.

So in the bulk nylon can behave as a hydrophilic material, but at the surface it can exhibit hydrophobic behavior. Just another hidden property of surfaces that makes them both tricky and fascinating to study.
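
The bulk figure quoted above can be put in concrete terms with a back-of-the-envelope calculation (a sketch only; the 3% saturation uptake comes from the quoted discussion, while the part mass is hypothetical):

```python
# Rough illustration of the BULK hydrophilic behavior of nylon:
# at saturation a nylon part absorbs about 3% of its dry mass in water.

def bulk_water_uptake(dry_mass_g, uptake_fraction=0.03):
    """Mass of water (grams) absorbed into the bulk at saturation."""
    return dry_mass_g * uptake_fraction

# A hypothetical 250 g molded nylon part:
print(bulk_water_uptake(250.0))  # ~7.5 g of water absorbed from the atmosphere
```

Note that this says nothing about the surface, which, as discussed above, can still present low-energy, hydrophobic behavior to an adhesive.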

The author invites any inquiries or comments.

3. March 2014   12:35 pm
Dr. K. L. Mittal, Dr. Robert H. Lacombe

Origin of Surface Energy

In the December 2013 issue of this blog we noted that nearly all the information we commonly acquire about any given surface comes from the radiation reflected from it. We also noted that, of the entire electromagnetic spectrum a surface could possibly radiate, we detect only the so-called visible band, which amounts to barely 2% of what could be emitted. Thus what our eyes alone tell us about what is happening on a given surface is very limited indeed. Not only that, but the type of information is limited to basically the bulk geometry, gross surface morphology and color.

The fact of the matter is that an entire universe of important physical properties remains invisible to our eyes. The invisible property we want to explore here in fact cannot be seen even in principle. This property is what is called the surface energy, and to get a picture of it we need the apparatus of thermodynamics.

The whole concept of energy is rather subtle and intricate in general.  In particular it can take many forms including:

  • Electromagnetic energy stored in electric fields
  • Magnetic energy stored in magnetic fields
  • Thermal energy stored in any material at a finite temperature
  • Potential energy of any mass in a gravitational field
  • Kinetic energy of any moving object
  • Relativistic energy of any massive object as given by Einstein’s famous formula E = mc²
  • etc.

However, for our purposes we only need to understand the elastic energy stored in common solids and we can approach this by considering the behavior of a common spring.  Stretch or compress a spring and it will store a certain amount of elastic energy which can be perceived by allowing the spring to return to its equilibrium length. Much the same type of behavior goes on in common solids.  Figure (1) gives a highly idealized but reasonably realistic picture of a solid material viewed at the atomic level.

Fig. 1: Schematic diagram of a solid viewed at the atomic level. To a first approximation the atoms can be thought of as being held together by microscopic springs which account for the elastic properties of the material.

The atoms/molecules in a given solid are held together by atomic and intermolecular forces which arise from the rather complex electromagnetic interactions among the electrons and nuclei which make up the bulk of any material. Fortunately, near equilibrium and for small deformations these interactions behave in a linear fashion, very much like simple springs. Thus, as Robert Hooke pointed out more than three centuries ago, the restoring force tending to bind the atoms together increases in a linear fashion as they tend to separate from one another. Things get quite a bit more complicated at large deformations, but that need not concern us here.

Referring to the upper diagram in figure (1) we see that a typical atom in the bulk of our hypothetical solid feels either tensile or compressive loads from all directions, and much the same is experienced by all the rest of the atoms in the deep interior of the solid. However, the situation is quite a bit different for those atoms at or near the surface, as shown in the bottom diagram of figure (1). Looking down into the bulk of the material they feel the same forces as the bulk atoms do, but now there is no material on top, which creates a highly asymmetrical situation. It is precisely this asymmetry that gives rise to the unique surface tension or surface energy of the solid.


A Word on Units

Perhaps one of the most confusing things about surface energies is the units they are expressed in, so we take a quick break here to clear up this issue. Going back to our spring: if we stretch it, there arises an immediate force tending to return it to its unstretched length. Current international convention expresses this force in the standard SI unit[1], the newton, with the dyne and the pound also in use but not considered standard by the international community. All systems of units are essentially arbitrary, but it is nonetheless important to settle on a common standard. The newton is the canonical unit of the international scientific community, and the dyne is a scaled-down derivative. The pound is an archaic holdover from the past but is still much in use in commercial transactions, especially in the USA.

Focusing on the newton, it is formally defined as the force required to accelerate a one kilogram mass resting on a friction-free surface at one meter per second per second. That means the mass increases its speed by one meter per second every second under the action of the applied 1 newton load. We can also think of the newton in more intuitive, if less rigorous, terms as the weight of a common apple at sea level. Thus if you hold a standard-sized apple it imparts a force of close to one newton on your hand. In terms of pounds the apple weighs slightly more than 1/5 of a pound, and in dynes it weighs about 100,000 dynes.

With the concept of force now rigorously defined, we move on to the concept of energy, again using our standard apple as a prop. Force times distance is energy in the form of work. Let’s assume that the apple weighs exactly 1 newton. If we raise the apple from the ground to a height of 1 meter we will have done 1 joule’s worth of work or, putting it a little differently, we will have increased the apple’s gravitational energy by one joule.

Getting back to our spring, it stores what is called elastic energy. As stated above, energy is force times distance, and since the restoring force of a spring is proportional to the extension, the energy stored in an extended spring is proportional to the extension squared. These ideas can all be compactly summarized in the following formulas:

F = -k x (Restoring force exerted by a stretched spring) (1)

F = Force in newtons
x = Displacement in meters
k = Spring constant in newtons/meter

The minus sign in Eq(1) indicates the force is always a restoring force tending to oppose any extension or compression.

The energy stored in the spring is the work done against the force of Eq. (1), i.e. the integral of kx from 0 to some extension d:

W = ∫ kx dx = (½)kd² (Energy stored in stretched spring) (2)

Thus if our spring has a spring constant of 1 newton/meter and we extend it a distance of 1 meter, it will pull back with a force of 1 newton. It will also store an energy of ½ joule.
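
Eqs. (1) and (2) can be checked numerically (a minimal sketch using the illustrative values from the text):

```python
def spring_force(k, x):
    """Restoring force of Eq. (1): F = -k*x, in newtons."""
    return -k * x

def spring_energy(k, d):
    """Elastic energy of Eq. (2): W = (1/2)*k*d**2, in joules."""
    return 0.5 * k * d ** 2

k = 1.0  # spring constant, newtons/meter
d = 1.0  # extension, meters
print(spring_force(k, d))   # -1.0: pulls back with a force of 1 newton
print(spring_energy(k, d))  # 0.5: half a joule stored
```

The minus sign on the force and the positive stored energy are exactly the behavior described in the surrounding text.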

We can think of the spring as having a tension of 1 newton/meter, which is just another name for the spring constant. Turning now to the physical surface of a polymer such as nylon, the springs binding the atomic units together would have a tension of about 8×10⁻²⁵ newtons/meter, and this is the basis of the so-called surface tension or surface energy of this material. In practice we do not deal with such impossibly small numbers, so we need to scale things up a bit. In the case of our nylon, one square meter of the surface will contain something like 5×10²² molecular bonds among the surface moieties or, in terms of our simple spring model, 5×10²² springs. So we take the surface tension of our polymer to be the tension in a single bond times the total number of bonds in a square meter, which for our nylon material comes to 40×10⁻³ newtons/meter.

This is still too awkward, so we introduce the millinewton (abbr. mN), which is 10⁻³ newtons; nylon now has a surface tension of 40 mN/m (abbreviating the meter as m). Why stop here? We can do a little algebra on the units and say that 40 mN/m is the same as 40 mN·m/m², by multiplying numerator and denominator by m. Now the mN·m we recognize as a millijoule (mJ), and so our nylon can be thought of as having a surface tension (aka surface energy) of 40 mJ/m².

And it does not stop here. Many folks dealing with surface tension measurements would rather not deal with the milli prefix and such huge surface areas as a square meter. It is more convenient to scale down the force unit to dynes (10⁻⁵ newtons) and use square centimeters (abbr. cm²) instead of square meters. Thus 40 mN/m scales down to 40 dynes/cm.
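
The unit bookkeeping above is easy to verify in a few lines (the bond tension and bond density are the illustrative figures from the text, not measured values):

```python
# Surface tension of our model nylon, built up from single-bond "springs".
bond_tension = 8e-25   # newtons/meter, tension of one molecular bond
bonds_per_m2 = 5e22    # molecular bonds per square meter of surface

# Surface tension = single-bond tension x bonds per square meter
surface_tension = bond_tension * bonds_per_m2       # ~0.04 N/m, i.e. 40 mN/m

# Multiply numerator and denominator by meters: mN/m is the same as mJ/m^2
surface_energy_mJ_per_m2 = surface_tension * 1e3    # ~40 mJ/m^2

# Scale down: 1 N = 1e5 dynes and 1 m = 1e2 cm, so 1 N/m = 1e3 dynes/cm
surface_tension_dynes_per_cm = surface_tension * 1e3  # ~40 dynes/cm
```

All three numbers are the same physical quantity; only the units differ.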


Real Solid Surfaces

Now that we have pinned down suitable units for measuring surface energies we can have a closer look at real material surfaces. The diagrams in figure 1 give a much better depiction of a liquid surface than of a solid. Liquids have high mobility and can always adjust their configuration to give a uniform surface of minimum surface tension. Not so with solids. The surface configuration of a solid depends sensitively on the thermal-mechanical loading conditions under which it was created. Was the material cooled rapidly or slowly? What type of loads, if any, were active during the cooling process?

Fig. 2: More realistic depiction of an actual solid surface

Figure 2 gives a more realistic depiction of what a typical solid surface looks like. This figure depicts four typical surface flaws that can significantly alter the surface energy of any real solid:

  1. VOIDS: Materials that have been rapidly quenched may not have time to completely condense giving rise to voids which are a source of tensile stress that will alter the surface energy in their vicinity.
  2. INCLUSIONS: No material is 100% pure and contamination species have a strong tendency to migrate toward surfaces where they upset the normal packing and in many cases give rise to a local compressive stress.
  3. GRAIN BOUNDARIES/DISLOCATIONS: Nearly all crystalline and semicrystalline materials are polycrystalline in nature.  That is, they are made up of an aggregate of a large number of small crystals all packed together in no particular order.  The boundary where two crystallites meet forms what is called a grain boundary.  Further, the misalignment of planes within the crystallites gives rise to what are called dislocations.  These and other imperfections can give rise to local stress fields where they intersect a surface, again altering the local surface energy.
  4. CONTAMINATION: Of all the surface imperfections, contamination layers have the most profound effect on surface energies.  Real material objects sit around on benches in the lab or other platforms where they are subject to constant bombardment from all the contaminants and gases in a typical atmosphere, not to mention the greasy fingers of human handlers.

Needless to say all of these considerations make the accurate measurement of the surface energies of solids a rather tricky business. But enough for now.  We take up this question in the next chapter.

The author invites any inquiries or comments.



            [1]The International System of Units (abbreviated SI from the French Le Système international d’unités). The General Conference on Weights and Measures, an organization set up by the Convention of the Metre in 1875, succeeded in bringing together many international organizations to agree not only on the definitions of the SI but also on rules for writing and presenting measurements in a standardized manner around the globe.



17. February 2014   8:30 am
Dr. K. L. Mittal, Dr. Robert H. Lacombe

We interrupt the normal flow of this blog on “SURFACES: THE INVISIBLE UNIVERSE” to present a book review of a volume which should be of keen interest to all working in the field of atmospheric plasma technology. The volume in question is:


The volume contains 15 review articles, ranging from surface modification with plasma printing to the deposition of nanosilica coatings on plasma-activated polyethylene. Each article has been produced by leading experts in the respective topic and gives a truly authoritative examination of the subject matter under review. A quick look at the table of contents reveals the remarkable scope of this volume.


  1. Combinatorial Plasma-based Surface Modification by Means of Plasma Printing with Gas-carrying Plasma Stamps at Ambient Pressure;  Alena Hinze, Andrew Marchesseault, Stephanus Büttgenbach, Michael Thomas and Claus-Peter Klages
  2. Treatment of Polymer Surfaces with Surface Dielectric Barrier Discharge Plasmas; Marcel Šimor and Yves Creyghton
  3. Selective Surface Modification of Polymeric Materials by Atmospheric-Pressure Plasmas: Selective Substitution Reactions on Polymer Surfaces by Different Plasmas; Norihiro Inagaki
  4. Permanence of Functional Groups at Polyolefin Surfaces Introduced by Dielectric Barrier Discharge Pretreatment in Presence of Aerosols;  R. Mix, J. F. Friedrich and N. Inagaki
  5. Achieving Nano-scale Surface Structure on Wool Fabric by Atmospheric Pressure Plasma Treatment;  C. W. Kan, W. Y. I Tsoi, C. W. M. Yuen, T. M. Choi and T. B. Tang
  6. Deposition of Nanosilica Coatings on Plasma Activated Polyethylene Films;  D. D.  Pappas, A. A. Bujanda, J. A. Orlicki, J. D. Demaree, J. K. Hirvonen, R. E. Jensen and S. H. McKnight
  7. Atmospheric Plasma Treatment of Polymers for Biomedical Applications;  N. Gomathi, A. K. Chanda and S. Neogi


  1. Atmospheric Pressure Plasma Polymerization Surface Treatments by Dielectric Barrier Discharge for Enhanced Polymer-polymer and Metal-polymer Adhesion;  Maryline Moreno-Couranjou, Nicolas D. Boscher, David Duday, Rémy Maurau, Elodie Lecoq and Patrick Choquet
  2. Adhesion Improvement by Nitrogen Functionalization of Polymers Using DBD-based Plasma Sources at Ambient Pressure;  Michael Thomas, Marko Eichler, Kristina Lachmann, Jochen Borris, Alena Hinze and Claus-Peter Klages
  3. Adhesion Improvement of Polypropylene through Aerosol Assisted Plasma Deposition at Atmospheric Pressure;  Marorie Dubreuil, Erik Bongaers and Dirk Vangeneugden
  4. The Effect of Helium-Air, Helium-Water, Helium-Oxygen and Helium-Nitrogen Atmospheric Pressure Plasmas on the Adhesion Strength of Polyethylene;  Victor Rodriguez-Santiago, Andres A. Bujanda, Kenneth E. Strawhecker and Daphne D. Pappas
  5. Atmospheric Plasma Surface Treatment of Styrene-Butadiene Rubber: Study of Adhesion Ageing Effects;  Cátia A. Carreira, Ricardo M. Silva, Vera V. Pinto, Maria José Ferreira, Fernando Sousa, Fernando Silva and Carlos M. Pereira
  6. Atmospheric Plasma Treatment in Extrusion Coating:  Part 1 Surface Wetting and LDPE Adhesion to Paper;  Mikko Tuominen, J. Lavonen, H. Teisala, M. Stepien and J. Kuusipalo
  7. Atmospheric Plasma Treatment in Extrusion Coating:  Part 2 Surface Modification of LDPE and PP Coated Papers;  Mikko Tuominen, J. Lavonen, J. Lahti and J. Kuusipalo
  8. Achieving Enhanced Fracture Toughness of Adhesively Bonded Cured Composite Joint Systems Using Atmospheric Pressure Plasma Treatments;  Amsarani Ramamoorthy, Joseph Mohan, Greg Byrne, Neal Murphy, Alojz  Ivankoviv and Denis P. Dowling
Schematic two dimensional slice through a typical plasma stamp

A cursory glance at the above list readily gives one the impression that the applications of the atmospheric plasma technique are limited solely by one’s imagination.  It is also clear that this short review can cover only a small fraction of the material in this volume.  Quite likely the most innovative paper in the collection is the one on “Combinatorial Plasma-based Surface Modification …” listed as number 1 above.  This work attempts to take the process of plasma surface modification to a higher level through the use of “plasma stamps”, which can be used to pattern a substrate with varying levels of plasma treatment in a single run.  A schematic diagram of a plasma stamp is shown in figure (1).  The substrate to be treated is patterned with an array of chambers using poly(dimethylsiloxane) (PDMS) as an insulator layer.  The resulting array is sandwiched between a porous metal mesh and an electrode.  The metal mesh in this case serves a dual purpose as a gas carrier and as an electrode.

The authors cite a number of advantages of the plasma stamp configuration, including:

  • Due to the small size of the plasma chambers it is easy to supply nearly unlimited volumes of gas to the active micro-plasmas which is very useful when performing film depositions as opposed to simply performing a surface modification.
  • Again due to the small cavity size the stamp can be rapidly filled using  a small amount of gas.  Thus the process is not only economical in the use of gas but the small chambers can be rapidly purged of unwanted oxygen which is a critical requirement when performing plasma nitrogenation treatments.
  • The small cavity size also allows reaction products created in the cavities that are not deposited to be swept away efficiently in the gas stream.  This is very useful in preventing fouling due to the redeposition of plasma  polymers.
  • Quite likely the most significant advantage of the plasma stamp technology is the fact that quite large arrays of the plasma micro-cavities can be created, allowing for very efficient combinatorial studies of different plasma treatments on a single substrate in a single run.  Thus one can easily imagine a two-dimensional array where two different gas streams are independently introduced to the array from opposite sides of the inlet edge.  The streams combine continuously across the entire array of cavities, giving a well defined gradient of gas composition over the entire array.  Different cavities thereby receive different treatments depending on their location in the overall array.  The results can then be inspected by any of a number of surface analysis methods, such as Fourier Transform Infrared spectroscopy (FTIR) or X-ray Photoelectron Spectroscopy (XPS).  Thus a large number of different surface treatments can be screened in a highly efficient manner.
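
The combinatorial gradient idea in the last bullet can be sketched in a few lines of code (a hypothetical model; the linear-blend profile and column count are our assumptions, not figures from the article — the real composition profile depends on the flow conditions):

```python
# Two process gases enter an N-column cavity array from opposite edges
# and blend across it, so each column of cavities sees a different mixture.

def gas_gradient(n_columns):
    """Fraction of gas A seen by each column, assuming a linear blend
    from pure A (column 0) down to pure B (last column)."""
    return [1.0 - i / (n_columns - 1) for i in range(n_columns)]

print(gas_gradient(5))  # [1.0, 0.75, 0.5, 0.25, 0.0]
```

Each column then corresponds to one treatment condition that can later be mapped by FTIR or XPS.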

Again the large scope of the volume does not allow us to comment on the other equally interesting articles. It should be clear, however, as mentioned above that the possibilities are limited only by the imagination.

27. January 2014   9:33 am
Dr. K. L. Mittal, Dr. Robert H. Lacombe

This issue of the SURFACE SCIENCE CORNER blog inaugurates a series of essays on the above-mentioned topic, concerning how we recognize the ubiquitous surfaces we look at in all our waking moments.  The answer, of course, is through the subtle apparatus of our visual neurology, which is activated by the light coming at us from all directions.  The question then becomes: what information is all this light carrying to our visual cortex?  Before we try to unravel this question, however, we need to examine in a little more detail the nature of the light that is being reflected into our eyes from all directions.

Electromagnetic spectrum on a logarithmic scale

The light coming at us is part of what is called the electromagnetic radiation spectrum, the entire extent of which is displayed in figure (1).  The striking thing about this figure is the enormous range of the spectrum, covering some 20 orders of magnitude on the logarithmic scale shown.  To print the entire figure on a linear scale would require a sheet of paper extending out to the edge of the solar system, assuming 0.1 mm of sheet per 10 Hz of frequency.  The second remarkable feature of this diagram for our purposes is that the range of frequencies of visible radiation, which is what our eyes detect, amounts to less than 2% of the total on this logarithmic scale.  Though we detect a limited number of mechanical and thermal properties of surfaces through direct touch, the preponderance of our awareness of surfaces comes from reflected radiation.  Going by the figure, we see that our eyes are missing some 98% of what is potentially being reflected at us.
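
The "less than 2%" figure is easy to check (a quick sketch; the visible-band limits of roughly 4.3×10¹⁴ to 7.5×10¹⁴ Hz are assumed round numbers):

```python
import math

# On the 20-decade logarithmic scale of figure (1), the visible band
# occupies only a sliver of the displayed spectrum.
visible_decades = math.log10(7.5e14 / 4.3e14)  # width of visible band in decades
fraction = visible_decades / 20.0              # share of the 20-decade scale
print(f"visible band is {100 * fraction:.1f}% of the displayed spectrum")  # ~1.2%
```

About 1.2% on the logarithmic scale, comfortably under the 2% quoted in the text.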

So what are we missing in particular?  Let’s consider first the infrared region, from roughly 10¹¹ to 10¹⁴ Hz.  This is radiation contributed by the incessant motions of the atoms and molecules which make up all matter.  To our eyes all surfaces lying at rest are perfectly still, certainly at the macroscopic level.  However, consider for example a carbon atom associated with a particular bit of organic contamination residing on some apparently undisturbed surface at room temperature.  An elementary thermodynamic calculation indicates that, far from sitting still, such an atom is vibrating in place at an average velocity near 300 m/sec.  This motion, combined with the motion of all the other atoms and molecules on the surface, gives rise to infrared radiation which is beamed in all directions and is entirely invisible to our eyes.  We can, however, detect the molecular vibrations of organic molecules on surfaces with the aid of specialized infrared spectrometers, so we know they are there even though we cannot see them.

What else are we missing?  Going to higher frequencies, in the range of 10¹⁶ to 10¹⁹ Hz, we find ourselves in the land of the X-rays, which reflect information on atomic and molecular structure on the scale of a few angstroms, or roughly 10⁻¹⁰ m.  If we could detect this radiation we would be able to see the atomic and molecular packing of all the species at the surface: things like crystal structure, grain boundaries, dislocations and assorted other types of contamination and defects lying on the surface.  In addition we would be able to detect the inherent roughness of the surface at the atomic and molecular scale.  What to the eye would appear to be a perfectly smooth surface would, when observed with X-rays, appear quite rough and rugose.  Such variable surface topography can have quite significant effects on common properties such as surface wettability.

Going to yet higher frequencies we find ourselves in the range of the gamma rays, which live in the range from roughly 10¹⁹ to 10²⁰ Hz.  The gamma rays allow us to peer into the goings-on in the atomic nucleus, some 1000 times smaller than the typical atom.  In particular, some nuclei are unstable and can disintegrate into smaller nuclei, giving off gamma rays and other particles in the process.  Most common materials are not radioactive, but some do have contamination-level concentrations of radioactive species which can give off barely detectable amounts of radiation.  Now one might not expect that sub-detectable levels of radiation would be of much concern to the practical product engineer manufacturing some wholly macroscopic device for industry.  However, the world of surfaces can be quite subtle, and engineers in the microelectronics industry got an elementary lesson in nuclear physics from that most common of common materials: lead.  It turns out that lead can harbor contamination levels of radioactive species whose activity is quite invisible to our eyes, as explained above.  Lead has a long history of being used to make electrical connections, going back to at least the mid 19th century.  It so happened that in the early 1980s lead solder was being used to connect sensitive memory chips to ceramic substrates.  The memory chips were of a special kind which utilized the very high resistivity of single crystal silicon to trap a small amount of charge in a small cell, which formed the basis of an elementary unit of memory.  A cell containing charge served as a boolean “1” and an empty cell represented a boolean “0”.  All well and good, but all the cells had to be connected to the remaining computer circuitry using metal lines and interconnects, and of course that old workhorse lead served as one of the interconnect materials.  The reader can now well guess what happened.
The radioactive contaminant species in the lead would decay from time to time giving off not only a gamma ray but also a highly charged alpha particle.  The alpha particle was the main mischief maker.  Carrying a charge of +2 it does not travel very far in ordinary matter but where it does go it leaves behind a trail of ionization which can momentarily turn a highly resistive material like single crystal silicon into a good conductor along the path of the alpha particle.  One can easily imagine a wayward alpha particle crashing into one of the silicon memory cells causing a charged cell to discharge along the ionization path left by the alpha.  A memory register of the computer has now been randomly and irreversibly changed which is not good from the point of view of programming logic.  If the affected register happened to contain an important logic instruction the result would easily lead to a serious programming error or simply machine lockup.  The field engineers came to know this type of problem as a “soft” error since the ionization trail left by the alpha particle would quickly dissipate leaving the affected memory cell quite unharmed.  Thus any attempt to locate the source of the error would be futile since no permanent hardware malfunction was involved.  Such so called “soft” errors are the worst kind from the point of view of the field engineer.  They come at random from seemingly nowhere and the culprit escapes without a trace.  How this problem was eventually solved is a story for another time but for now it simply illustrates that what we cannot see coming off a surface can indeed bite us in wholly unsuspected ways.

If we now go to the lower frequencies, from 10⁴ to 10¹¹ Hz, we come across the radio and microwave bands.  If we could detect these frequencies we would be able to see the myriad electronic and polarization currents which endlessly flow in all materials and give rise to a number of phenomena which affect us in various ways even though we cannot visually see them.

All this will be covered in future issues of the SURFACE SCIENCE CORNER blog.

The author invites any inquiries or comments.

Category: SOLAR
11. December 2013   7:07 am
Khoren Sahagian

Some plasma physicists have proposed an alternative comet theory.  In their model a comet may actually be a negatively charged body created by the violent collision of large masses during planet formation.  As these charged bodies accelerate toward the sun they interact with the solar wind in an extravagant display of plasma discharge.  Water or hydroxyl compounds would then be explained as the byproduct of combining the oxygen present in silicates with the protons ejected from the sun.

Comet C/2012 S1 (ISON) taken from TRAPPIST national telescope at ESO’s La Silla Observatory on the morning of Nov. 15, 2013. (Liège, Belgium)

Scientific observations of comets have been recorded for more than a century.  In the late 20th century the scientific community reached its first consensus on the comet’s constitution.  Fred Whipple coined the “dirty snowball” hypothesis, presenting the astronomical object as an amalgamation of ice, rock, and star dust.  When this body nears the sun, a brilliant tail emerges, resulting from the sublimation of ice within the comet nucleus.

Yet some would argue that a few attributes of a comet remain unexplained.  First, the coma generally remains spherical.  This would not necessarily be expected from asymmetric jets of ice emanating from the core, but might be sustained by a strong electrical field.  Second, little water has been found in missions sampling the surface and tail of a comet.  One such program, the Stardust mission, sent a spacecraft equipped with an aerogel net through the path of a comet tail.  Upon return, the ground-based team was surprised to find an assortment of complex high-temperature crystalline formations, portions of which were anhydrous structures.  This fundamentally challenges the accepted theory of a low-temperature snowball.  Some even liken a comet surface to objects on Earth that have been ablated by plasma discharge.  Search for SEM images and decide for yourself.

Comet: plasma or ice?

Category: Cleaning / Glass
25. November 2013   2:31 am
Mikki Larner

Belmont, CA

Is it true that one of the first commercial uses of plasma ashing was to ablate fish to expose mercury contamination?

Sitting around the lunch table the other day, our chemist expanded on an early use of plasma for one of the first commercial applications: ashing fish to expose mercury (or other metals) to evaluate the impact of contamination from industry. While it seemed like a logical use of the technology, I couldn’t get my head around this being one of the first commercial applications… so I did a bit of research and finally reached out to one of the experts in the field of vacuum technology, Donald Mattox. He confirmed that low pressure plasma ashing has been used for over 50 years for trace element analysis – an early case of plasma replacing wet chemistry!

Don sent the following citations confirming the use:

1962: C. E. Gleit and W. D. Holland, “Use of electrically excited oxygen for the low temperature decomposition of organic substrates,” Anal. Chem., Vol. 34 (11), pp. 1454-1457

1977: M. Velodina, “Quantitative determination of mercury in organic materials by means of a low temperature, high frequency discharge plasma in oxygen,” Analytical Letters, 10(14), pp. 1189-1194

And Don added one of his favorite oxygen plasma cleaning stories (from his book “Foundations of Vacuum Coating Technology”):

When preparing to aluminize the Palomar mirror, John Strong notified the mirror polishers that he would be using a new cleaning technique using ‘a special fatty acid compound with precipitated chalk.’ When he arrived the ‘special fatty acid compound’ was Wild Root Cream Oil hair tonic (ad jingle: ‘You better get Wild Root Cream Oil, Charlie; It keeps your hair in trim; Because it’s non-alcoholic, Charlie; It’s made with soothing lanolin’). He stated, ‘In order to get glass clean you first have to get it properly dirty.’ The oil residue was ‘burned-off’ using an oxygen plasma in the vacuum deposition chamber. (From The Perfect Machine: The Building of the Palomar Telescope, Ronald Florence, pp 382-386, HarperCollins, 1994).

I’m assuming that the following US Patent from 1978 helps corroborate his story: 4088926: Plasma Cleaning Device (for cleaning organic contamination on optical surface) 

I found this quite interesting and did some additional research that I would like to share with my readers:

Atmospheric plasma has been used professionally by museums and NASA to selectively remove carbon contamination or char as a restoration technique for fine art.

Before and after image of artwork cleaned by atomic oxygen.


Some later work of interest was published by Texas A&M: researchers used a low temperature RF oxygen plasma to selectively oxidize the organic matter in rock art paint without damaging the inorganic rock substrate, so that the extracted organic carbon could be analyzed and dated.
1992: J. Russ, M. Hyman and M. Rowe (TAMU), “Direct Radiocarbon Dating of Rock Art,” Radiocarbon, Vol. 34, No. 3, pp. 867-872.

I could go on and on and on… Plasma truly offers us a tremendous toolbox for the modification of myriad materials!

28. October 2013   10:36 am
Dr. K. L. Mittal, Dr. Robert H. Lacombe

Dr. K. L. Mittal, Dr. Robert H. Lacombe

We interrupt the normal flow of essays on the invisible universe of surfaces to present a short review of a newly available volume which should be of strong interest to anyone involved in the modification of polymer surfaces using plasma technology. The volume in question is:

THE PLASMA CHEMISTRY OF POLYMER SURFACES: Advanced Techniques for Surface Design

By: Jörg Friedrich (Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, Germany, 2012)

By way of disclosure we should point out that Prof. Friedrich is a good friend and colleague of ours and a long-time participant in the MST CONFERENCES symposia series on Polymer Surface Modification and Metallized Plastics.  We can therefore personally vouch for his outstanding expertise in the topic under discussion.

The volume is indeed a major undertaking to categorize and review the truly vast topic of plasma processing as applied to polymer surfaces.  The table of contents gives a glimpse of the wide scope of topics covered:

  • Interaction between Plasmas and Polymers
  • Plasma
  • Chemistry and Energetics in Classic and Plasma Processes
  • Kinetics of Polymer Surface Modification
  • Bulk, Ablative and Side Reactions
  • Metallization of Plasma-Modified Polymers
  • Accelerated Plasma-Aging of Polymers
  • Polymer Surface Modification with Monosort Functional Groups
  • Atmospheric Pressure Plasmas
  • Plasma Polymerization
  • Pulsed-Plasma Polymerization


The volume is clearly directed toward the polymer chemist with emphasis on the creation and behavior of “functional groups” which tend to dominate the chemical behavior of polymer surfaces. The ubiquitous hydroxyl (-OH) group is a prime example. The introduction of such groups to a highly hydrophobic polypropylene surface, which repels water and most common inks, can almost magically turn it into a hydrophilic surface which can be easily written on with a ball point pen.  There are of course many more such “functional groups” and the volume goes into great detail as to how they can be created, how they affect the surface chemistry and how they behave over time.

The wide scope of the volume makes it impossible to review all of the topics listed above in any detail so we will focus instead on the chapter on Atmospheric Pressure Plasmas which is likely to be of most interest to readers of this column.

The section on atmospheric plasmas begins with the initially astonishing assertion that more than 99% of the universe is in the plasma state.  To us earthlings bound to a highly condensed rock of a planet this seems hardly credible as our local experience of the plasma state is confined to relatively rare phenomena such as lightning, auroras and flames.

However, simply looking up at the sky during daylight reveals the sun, a seething cauldron of plasma gases weighing more than 300,000 times the mass of the earth.  The sun of course is just one of a near infinitude of stars that make up the universe, so the 99% estimate is actually highly conservative.

Having established the ubiquity of the plasma state in the universe, the author then goes on to review a wide range of recent work on the use of atmospheric plasmas for the surface modification of polymers.  Of particular interest is the comparison of atmospheric plasma jets, low pressure oxygen plasma and dielectric barrier discharge plasma.  Figure 10.1 from the text is particularly interesting.  It compares the relative effectiveness of these techniques in introducing oxygen into the surface of polypropylene as a function of application time.  Interestingly, the data show that after about 6 seconds the atmospheric plasma jet technique delivers nearly twice the oxygen concentration to the surface as either the low pressure oxygen or dielectric barrier method.  This helps explain why the plasma jet approach is favored in manufacturing applications where a rapid process is necessary to achieve the required volumes.

A second most interesting application is the use of atmospheric plasma for polymer deposition onto surfaces.  This is essentially an attempt to apply a polymer coating to a surface, which in conventional low pressure plasma processing is carried out by the plasma polymerization method.  In the conventional approach, plasma activated monomer species are allowed to condense on the target surface where they can then react to form polymer networks.  This approach to applying a polymer coating faces a number of problems due to the high reactivity of the plasma activated monomers.  Once on the surface they can react in a wide variety of ways, giving rise to a complicated morphology which is difficult to understand.  In addition, the activated monomers tend to condense not only on the target substrate but on all the vacuum chamber surfaces.  Over time the coated chamber surfaces can flake off and cause a contamination problem.

In order to circumvent the problems encountered with the conventional plasma polymerization method, the atmospheric pressure approach injects a fully formed polymer species into the plasma stream as an aerosol.  The plasma gas activates both the polymer and the target surface, allowing the polymer to form a strongly adhering coating.  Thus the prefabricated polymer goes down as a complete entity, and there are no vacuum chamber walls to become coated and build up contamination.

These and other innovative plasma techniques are reviewed in some detail in this quite comprehensive volume.  We can highly recommend this book as a reference work for all those engaged in the research and development of plasma surface treatment technologies.

Do you have an offshoring or re-shoring experience to share?
14. September 2013   5:38 pm
Khoren Sahagian

Khoren Sahagian

On September 13th I asked a Product Realization Group panel whether they shared the perspective of US manufacturing as being risk-averse and slow to adopt new manufacturing innovations.  I was surprised by the answers I received.

Here in the Silicon Valley and elsewhere the culture of off-shoring is changing, especially in the high technology arena.  North American manufacturing still offers better inventory control, higher yield, better performance, and a stronger rate and culture of innovation.  High technology development does not just apply to the product but to all aspects of product realization.  A locally integrated and culturally aligned supply chain enables quick response and a faster pace for adopting innovative practices.

A few hidden costs of overseas manufacturing are language barriers, time zone delays, supply chain management, and breaches of intellectual property.  The latter can involve overseas shops transferring engineering diagrams, tooling, and shop floor practices to direct competitors!  Many also find a greater resistance to change in China.  Furthermore, the regulated spaces there are relatively ill-defined or inconsistent.

There is also a common misconception among students that manufacturing jobs are not good, but this is simply untrue.  The educational system should paint a new image of modern manufacturing as cool and clean, which are its true colors today.  Plasmatreat and Plasma Technology Systems are glad to be a part of new manufacturing innovation.


PRG – Product Realization Group


7. August 2013   1:11 pm
Dr. K. L. Mittal, Dr. Robert H. Lacombe

Dr. K. L. Mittal, Dr. Robert H. Lacombe


As outlined in last month’s issue of the SURFACE SCIENCE CORNER blog the next several issues will be devoted to exploring the invisible properties of surfaces that we are rarely able to detect by simple inspection but nonetheless have rather profound consequences for the way surfaces behave. However, if we are going to uncover the hidden aspects of surfaces we are going to have to look at the atomic and molecular level which is where all the secretive phenomena reside. Before doing this it would be very helpful to look at the behavior of bulk solids at the atomic and molecular level since what goes on at surfaces is basically a subtle variation of what goes on in the bulk.

So what do we really know about the nature of solids? I wrote an encyclopedia article a while back dealing with Basic Concepts in Adhesion Science and as part of that article I tried to introduce some fundamental concepts on the nature of solids at the atomic and molecular level. The editor wrote back that this section could be eliminated since everyone knows what atoms are all about. Really! I wrote back that yes most people have the 19th century view that atoms are some kind of tiny ball bearings and that solid matter is just a colossal agglomeration of these tiny beads held together by some kind of atomic forces. However, how many people are aware that atomic matter is mostly an exceedingly intense electromagnetic field?

Before entering into the realm of quantum electrodynamics it would be a good idea to cycle back to the late 19th century and investigate the far more intuitive concepts of what defines a piece of solid matter. At that point in time the prevailing scientific notion of a solid was anything that had a significant modulus. So what is a modulus? This concept goes back to the 17th century British physicist Robert Hooke, who first formulated his law of elasticity around 1660 and stated it as a Latin anagram, whose solution he published in 1678 as Ut tensio, sic vis; literally translated as: “As the extension, so the force,” or more commonly, “The extension is proportional to the force.” Basically if you pull on a strip of some material the amount of extension achieved will be directly proportional to the level of applied force, and the constant of proportionality is what we call the modulus of the material. For our purposes we need to make this concept more quantitative, and a simple example will help in this regard.

Say we have a thin strip of a typical engineering thermoplastic such as polystyrene and apply our stretch test to it. For quantitative purposes we let our strip be 10 centimeters (10 cm or 0.1 m) long by 0.5 cm (0.005 m) wide by 0.1 cm (0.001 m) thick. Common thermoplastic materials are fairly stiff at room temperature, so we will need to put the strip in a tensile test machine which will apply a tensile load to it measured in Newtons. For consistency we keep our units within the commonly accepted SI system (the Système international d’unités (SI), or International System of Units, defines seven units of measure as a basic set from which all other SI units are derived), so we use the Newton as the measure of force, abbreviated Nt, which in magnitude amounts roughly to the weight of a small apple, or close to 100 grams. Now our tensile tester applies a force over the entire cross section of our sample, which has some finite area, and thus applies a stress or force per unit area to the test specimen. Again for consistency with the SI system of units we measure the stress in Newtons per square meter (abbr. Nt/m²) or Pascals, the SI derived unit of pressure, stress, modulus and tensile strength, named after the French mathematician, physicist, inventor, writer and philosopher Blaise Pascal (1623-1662).

Upon applying a load to the sample strip we note that it extends by some amount Δ. To be in conformity with common practice we divide the amount of extension Δ by the original length of the sample “l” to get a dimensionless quantity ε = Δ/l called the strain. We can now recast Hooke’s law as: the stress is proportional to the strain, where the constant of proportionality is called the modulus E, also called the Young’s modulus in honor of Thomas Young, an English polymath notable for scientific contributions to the fields of vision, light, solid mechanics, energy, physiology, language, musical harmony and Egyptology (a polymath indeed!). Our formula can now be compactly expressed by the following simple expression:

σ = E × ε, or: stress is the modulus times the strain.

We use the traditional Greek symbols sigma for the stress and epsilon for the strain. The stress has units Nt/m², and since the strain is dimensionless the modulus must also have units Nt/m².

Now back to our test strip. Say we apply a load of 150 Nt with the tensile tester and note that the sample stretches by 1 millimeter or 0.001 m. Since our sample is 0.1 m long we have induced a strain of 0.001/0.1 = 0.01, or 1 percent of its original length. The cross section of our strip was 0.005 m x 0.001 m = 0.000005 m², and the applied load was 150 Nt, so the applied stress was 150/0.000005 Nt/m² = 30,000,000 pascals (abbr. Pa). We note that the pascal is a rather small unit since it amounts to roughly 100 grams of weight spread over one square meter. The pascal amounts to roughly one hundred thousandth of the atmospheric pressure which is squeezing on every one of us living reasonably close to sea level. Rather than deal with such an inconveniently small unit and carry all those cumbersome zeros around we define a new unit called the megapascal, or 1 million pascals (abbr. MPa). The applied stress is thus 30 MPa. Going back to Hooke’s law as formulated above we see that the modulus of our sample has to be the stress divided by the strain, or 30 MPa/0.01 = 3000 MPa. Since we abhor carrying around a lot of zeros we define yet another unit called the gigapascal (abbr. GPa), which is 1000 MPa or 1 billion pascals. The modulus of our sample thus comes to a nice round 3 GPa. This is quite close to the modulus of many common polymers, from plastic cups and dinner plates to plastic eyeglass lenses.
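For readers who like to check such arithmetic in code, the whole worked example above can be reproduced in a few lines of Python (a sketch; the variable names are ours, and the numbers are simply those quoted in the example):

```python
# Worked example: Young's modulus of a polystyrene test strip.
length = 0.1        # m (10 cm)
width = 0.005       # m (0.5 cm)
thickness = 0.001   # m (0.1 cm)
load = 150.0        # Nt (Newtons)
extension = 0.001   # m (1 mm)

area = width * thickness        # cross-sectional area, m^2
stress = load / area            # Nt/m^2, i.e. pascals
strain = extension / length     # dimensionless
modulus = stress / strain       # pascals

print(f"stress  = {stress / 1e6:g} MPa")   # 30 MPa
print(f"strain  = {strain:g}")             # 0.01, i.e. 1 percent
print(f"modulus = {modulus / 1e9:g} GPa")  # 3 GPa
```

The same three-line calculation (stress, strain, their ratio) applies to any tensile test, which is exactly how moduli are measured in practice.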

At this point our understanding of the elastic properties of solids is roughly where things stood at the end of the 19th century. All solids can be defined by their elastic modulus, which is determined experimentally by stretching a test piece and dividing the applied stress by the induced strain. All was not well, however, since the theory was based on the hypothesis that all solid materials were part of a continuum which looked the same at all scales of length. This notion was already under attack by the late 19th century, since many of the ideas associated with chemical reactions were best explained by assuming that elements were discrete entities. By the mid 20th century the atomic theory of matter was established beyond any reasonable doubt, and it was further determined that atoms themselves were made up of still smaller units called electrons, protons and neutrons.

So to explore the hidden nature of surfaces or bulk matter we have to look at the atomic level, or at spatial separations of 10⁻¹⁰ m or 0.1 nanometers (abbr. nm). This means we must leave the classical mechanics of 19th century continuum theory behind and delve into the nether world of quantum mechanics. Now we certainly do not have the space here to get into the details of quantum theory, but we can present the relevant results fairly succinctly. Those interested in a more detailed discussion are invited to go to the MST CONFERENCES web site and look up Vol. 2 No. 3 of the conference newsletter under the heading “Hamaker Theory or At Atomic Distances the World Follows the Rules of Quantum Mechanics”.

By the time the 20th century was half over it was well established that the properties of all common materials were fundamentally governed by electromagnetic interactions. Classical electromagnetism was well explained by Maxwell’s Equations, and quantum theory explained how the electromagnetic interactions between the fundamental particles such as the proton and the electron served to bind atomic matter together. What was happening within the atomic nucleus was still problematic since an entirely different set of forces were operating, but these forces held the typical nucleus together so strongly and within such a small volume that, for the purposes of understanding the chemical nature of matter, the nucleus could be assumed to be a point mass and a source of positive charge which attracted and bound the electrons. To get some idea of the sizes and distances involved, consider that a typical atomic nucleus is on the order of 3.4×10⁻¹⁵ m, the electromagnetic radius of the electron comes to about 2.8×10⁻¹⁵ m, and within a typical atom electrons and protons are separated on average by roughly a Bohr radius, which is close to 5.3×10⁻¹¹ m. From these numbers one readily perceives that atomic matter amounts to relatively small point masses separated by huge distances. To put the matter in better perspective simply scale the numbers up by a factor of 10¹³. The nucleus then would have a radius of 3.4 cm, roughly a small orange, the electron would come to 2.8 cm, or a crab apple, and they would be separated on average by a distance of 530 meters or roughly half a kilometer! Thus the world at the atomic level seems to be mostly empty space. Or is it? What is it that fills all that space and gives the impression that atomic matter is something sturdy and substantive? The answer as stated above is the electromagnetic field.
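The scaling arithmetic above is easy to verify (a minimal sketch; the values are just the representative figures quoted in the text):

```python
# Scale atomic dimensions up by a factor of 1e13, as done in the text,
# to get a feel for how empty atomic matter really is.
scale = 1e13
nucleus_radius = 3.4e-15    # m, typical atomic nucleus
electron_radius = 2.8e-15   # m, classical electron radius
bohr_radius = 5.3e-11       # m, mean electron-proton separation

print(f"nucleus:    {nucleus_radius * scale * 100:.1f} cm")   # 3.4 cm, a small orange
print(f"electron:   {electron_radius * scale * 100:.1f} cm")  # 2.8 cm, a crab apple
print(f"separation: {bohr_radius * scale:.0f} m")             # 530 m, half a kilometer
```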
In essence it is the electromagnetic field between the charged proton and electron that provides the substance of the hydrogen atom, or any other atom for that matter. It turns out that the square of the electric field is proportional to an energy density, typically measured in joules per cubic meter (abbr. J/m³), where the joule is the SI unit of energy named in honor of James Prescott Joule (1818-1889), an English physicist and brewer. Now we know from our elementary physics courses that the joule is formally defined as a force of 1 Nt acting through a distance of 1 m. Thus if you lift an apple off the floor and raise it to a height of 1 m you have expended about 1 joule of energy in the process. We see then that the joule has the fundamental units of Newton-meters (abbr. Nt-m). Thus the units of energy density can be written J/m³ = Nt-m/m³ = Nt/m², or putting it succinctly, the units of energy density are the same as the units of stress or modulus. So the energy density associated with the electric field generated by the electron and proton in the hydrogen atom gives rise to a modulus of some sort.

It turns out that the electric field close to an electron or proton is exceedingly intense. The newsletter article mentioned above derives a simple formula for estimating the energy density associated with the electric field close to an electron and the results are summarized in the following table:

TABLE 1: Electromagnetic energy density in the neighborhood of an electron at representative atomic distances, estimated from Eq. (10) of the newsletter article cited above.

Distance from electron | Energy density (GPa) | Comparable material
4.0                    | 2.8                  | Roughly the modulus of a thermoplastic, e.g. polystyrene
1.37                   | 208.0                | Modulus of a metal such as steel
1.0                    | 732.0                | Modulus of a refractory material like silicon carbide or diamond

Table 1 clearly shows that the electromagnetic energy density associated with the electron’s electric field can account, at least in a heuristic fashion, for the elastic properties of all forms of common matter.
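As a quick consistency check: the energy density of a static electric field is proportional to the square of the field, and the Coulomb field of a point charge falls off as 1/r², so the tabulated energy density should scale as 1/r⁴. A short sketch (assuming the first column of Table 1 is a distance in consistent units) confirms the table follows exactly this scaling:

```python
# Check Table 1 against the expected 1/r^4 scaling of the Coulomb
# field's energy density (u ~ E^2, and E ~ 1/r^2 for a point charge).
table = [(4.0, 2.8), (1.37, 208.0), (1.0, 732.0)]  # (distance, energy density in GPa)

r_ref, u_ref = table[-1]  # use the closest row (1.0, 732 GPa) as reference
for r, u in table:
    predicted = u_ref * (r_ref / r) ** 4
    print(f"r = {r:5.2f}: table {u:6.1f} GPa, 1/r^4 law predicts {predicted:6.1f} GPa")
```

The predicted values agree with the tabulated ones to within rounding, so the three rows are simply one estimate evaluated at three distances.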

This seems like a rather remarkable conclusion since it is also the electromagnetic field associated with sunlight that warms us on a sunny day. Even though solar radiation can warm us it certainly does not appear to have any appreciable substance. Nevertheless sunlight also presses down on us with a very weak but nonzero pressure. In particular, if you step out on a sunny day and hold your hand facing the sun it receives a thermal input of about 7 watts, which is just enough to detect. What is not detectable but still present is the fact that the solar radiation exerts a pressure on your hand of roughly 4.5×10⁻⁶ Pa, which is so small that you would need a balance with nanonewton resolution to detect it. So it is essentially a matter of intensity that determines the apparent behavior of electromagnetic fields. The field close to an electron exerts a reactive force so intense that it closely emulates the behavior of solid matter, whereas the field associated with common light is so weak it seems to have no substance at all.
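The quoted solar-pressure figure follows from the standard relation P = I/c for a fully absorbing surface, using the solar constant of about 1361 W/m². The 7 watt figure additionally assumes a hand area of very roughly 70 cm² and a ground-level intensity near 1000 W/m²; both of those area and intensity values are our illustrative assumptions, not numbers from the text:

```python
# Radiation pressure of sunlight on an absorbing surface: P = I / c.
c = 299_792_458.0           # speed of light, m/s
intensity_space = 1361.0    # W/m^2, solar constant above the atmosphere
intensity_ground = 1000.0   # W/m^2, rough clear-day value at sea level (assumed)
hand_area = 0.007           # m^2, illustrative hand area of ~70 cm^2 (assumed)

pressure = intensity_space / c
print(f"radiation pressure: {pressure:.1e} Pa")  # ~4.5e-6 Pa, as quoted
print(f"thermal input on a hand: {intensity_ground * hand_area:.0f} W")  # ~7 W
```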

This basic fact was brought home to me many years ago as a graduate student doing a project on particle physics. At that time, long ago, we were analyzing bubble chamber tracks from proton collisions generated by the Zero Gradient Synchrotron at Argonne National Labs. We were essentially using highly specialized camera equipment to analyze the tracks of high energy protons as they traversed the chamber. Mostly what you saw on a given exposure were the slightly curved tracks of the protons as they whizzed through the chamber. The tracks had curvature since the chamber was immersed in a strong magnetic field which exerted a force perpendicular to the travel direction of the protons. Every dozen or so frames, however, there would be an event whereby one of the incident protons would smash into the proton of one of the hydrogen atoms filling the chamber, since the entire contents of the chamber was liquid hydrogen. A collision event would appear as a proton track suddenly terminating at a point in the chamber, with a number of separate tracks going off in all directions. Two protons had essentially collided and exploded into a number of other particles, indicating that the proton is not an elementary particle but is made up of still smaller entities. What caught my eye on occasion, however, was the fact that some of the collision events gave off a phantom particle that would leave no track near the collision point. The only way one would know that such a particle had been emitted is that at some distance from the primary collision point a pair of tracks would appear emanating from a single point and veering off in separate directions, leaving a V shaped imprint behind. That phantom particle was in fact a bit of exceedingly high energy electromagnetic radiation called a gamma ray. These gamma rays were so energetic that they could convert into an electron and a positron, a process known as pair production.
This clearly illustrated the substantive nature of the electromagnetic field by directly turning radiation into two elementary particles.

This basically ends our introductory tutorial on the electromagnetic nature of solid matter. The stage is now set to explore further the consequences of what we have uncovered in relation to the behavior of solids at their surfaces and how they interact with other solids. We commonly deal with plasmas to modify surfaces and indeed the energy density of plasmas is also locked up in intense electromagnetic fields. So we will see that the electromagnetic field has a few more tricks up its sleeve as will be uncovered in future issues of THE SURFACE SCIENCE CORNER.

20. July 2013   4:19 pm
Khoren Sahagian

Khoren Sahagian


It is understood that 99.999% of the universe is in a state of plasma.  The universe is primarily composed of charged species and subatomic particles.  The textbook definition of plasma is that it is the 4th state of matter.  I believe that this definition deserves to be challenged.  Matter originates in plasma, and as energy is lost the celestial bodies and cohesive structures that make up our universe are formed.  In this sense plasma is better described as the first state of matter, seeing as this is where chemical compounds begin.


But why recalibrate our perception of plasma?  Here on Earth most of the industrial plasma (those used to modify material) are employed in the cleaning and activation of surfaces.  Essentially this involves the ablation of weakly bound surface contaminant. But I would categorize this process description as a “4th state” mindset because the tendency is to focus our attention on the solid compounds that originate on the solid and are removed by an energized state of plasma.  While this does occur I feel there is greater accuracy in envisaging plasma as a 1st state of matter putting greater emphasis on the fragments formed in plasma that come down to join the surface.  The precise chemical compounds that are added are what enable adhesive bonding or reaction.  Those species originate in the plasma and form covalent bonds to the substrate after the source of energy is taken away.