Category: AUTOMOTIVE
18. December 2014   2:24 pm
Andy Stecher
Elgin, IL

I recently returned from Detroit, discussing projects with OEMs and tier suppliers. This really is an exciting time in automotive: CAFE standards, lightweighting, reduction of carbon emissions, and increased occupational safety requirements call for new materials selections and advanced production methods.

We very much appreciate all of our wonderful automotive customers and partners, and we are always looking for ways to serve you better. Along those lines, I wanted to talk to you a bit about what makes servicing our automotive clients so special for us.

Perhaps you, like me, occasionally receive communications from other companies claiming plasma knowledge in the automotive business. While their technologies may appear somewhat similar to ours, nothing could be further from the truth! Plasmatreat is proud to be the only company that offers:

  • ISO 9001, CE, and UL/CSA certifications
  • A proven track record of successful value contribution in the automotive industry for 20 years
  • Fully integrated, automated automotive plant solutions supported by a global service team
  • In-the-field personal technical engineering support plus 3 different laboratories in North America for individual testing
  • Robust R&D services that continually expand our growing list of industry solutions
  • Award-winning industrial product design technology
  • What we believe to be the most diverse private-sector atmospheric and low pressure plasma equipment suite and surface chemistry offerings in the world
  • Exceptional product reliability and customer service. (As one satisfied customer just told me, “I am extremely impressed by Plasmatreat’s level of service. The extra effort with the in-plant training and support is above and beyond expectations.” We are pleased to have many other similar comments in our files!)

 

In short, we here at Plasmatreat take great pride in having established the “gold standard” of our industry; we call this Plasmersion!

We hope to have many opportunities in 2015 and beyond to demonstrate Plasmersion to you first-hand. If you ever have any questions about our value proposition, please do not hesitate to get in touch – we’d be delighted to speak with you.

You may also wish to check out our featured case study article in the January 2015 issue of Engine Technology International magazine, which discusses why Plasmatreat’s technology is both earth-friendly and highly effective for various engine manufacturing processes. Click here to read it.

Best wishes for a wonderful holiday season and a productive and prosperous 2015!

11. December 2014   8:37 pm
Dr. K. L. Mittal, Dr. Robert H. Lacombe

  X-RAY PHOTOELECTRON SPECTROSCOPY – XPS

In last month’s issue of the INVISIBLE UNIVERSE essay series we established two fundamental reasons why surfaces remain essentially invisible to us even though they are the most common entity with which we interact every waking moment. The first reason is that our eyes detect only about 2% of the radiation that any given surface can beam at us, forcing us to rely on special experimental technologies such as XPS to discern otherwise invisible surface structures. The second reason is that surfaces are subject to the laws of quantum mechanics, which determine not only the details of surface structure but also govern the interactions of all forms of radiation with the atomic and molecular entities of which all surfaces are composed.

 Last month’s discussion laid out the fundamental precepts of quantum theory and we now apply them to understand the workings of the XPS experiment.

As pointed out in last month’s discussion, the essence of the XPS experiment is that an X-ray (a high-energy photon) impinges on a surface and causes an electron (called a photoelectron because it is ejected by a photon) to be ejected from the surface. The laws of quantum mechanics ensure that the ejected electron comes off at a very specific energy which uniquely identifies the type of atom from which it was ejected. This fact is what makes the XPS experiment so useful in probing the chemistry of any given surface. To understand this better we need to understand a little more about how quantum mechanics determines the atomic and molecular structure of all matter.
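As a concrete illustration of this energy bookkeeping (a minimal sketch of ours, not part of the original essay): the kinetic energy at which the photoelectron comes off is, to a good approximation, the X-ray photon energy minus the binding energy of the level it left, with the small spectrometer work-function correction ignored here. The photon energy and binding energies below are approximate handbook values used only for illustration.

```python
# Photoelectron kinetic energy: KE ~ (photon energy) - (binding energy).
# Al K-alpha X-rays (~1486.6 eV) are a common XPS source; the binding
# energies below are approximate handbook values, for illustration only.

PHOTON_ENERGY_EV = 1486.6  # Al K-alpha line, approximate

binding_energies_ev = {
    "O 1s": 532.0,   # oxygen core level, approximate
    "C 1s": 285.0,   # carbon core level, approximate
    "Si 2p": 99.0,   # elemental silicon core level, approximate
}

for level, be in binding_energies_ev.items():
    ke = PHOTON_ENERGY_EV - be
    print(f"{level}: binding energy ~{be:.0f} eV -> photoelectron at ~{ke:.0f} eV")
```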

The story basically begins with a series of experiments carried out by Hans Geiger and Ernest Marsden [1] over the period from 1908 to 1913, in which they bombarded gold foils with alpha particles and detected how the particles were scattered. Up to that time no one really had any idea what to expect. The mechanics of materials at the time assumed all matter to be a sort of homogeneous continuum, and on the basis of that assumption the best guess was that the particles would pass straight through or somehow be absorbed by the material. What they found instead was that a fraction of the particles were scattered through large angles, up to 180 degrees. Ernest Rutherford, later Director of the Cavendish Laboratory, commented on these remarkable results as follows:

 It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you. On consideration, I realized that this scattering backward must be the result of a single collision, and when I made calculations I saw that it was impossible to get anything of that order of magnitude unless you took a system in which the greater part of the mass of the atom was concentrated in a minute nucleus. It was then that I had the idea of an atom with a minute massive centre, carrying a charge.

 This remarkable insight led to the notion that atomic matter was something like a miniature solar system where electrons orbit a central nucleus much as earth and the planets orbit the sun.  This all seemed quite plausible for a while since it was known that positive and negative particles attracted each other by an inverse square power law much as the sun attracts each of the planets.  This notion quickly fell apart, however, since the then well known laws of electrodynamics unambiguously predicted that such a system would be unstable.  Classical electrodynamics clearly predicted that the orbiting electrons would quickly radiate away their orbital energy and fall into the central nucleus.

The conundrum of how atomic matter manages to exist was solved only when physicists attacked the problem using the principles of quantum theory. In particular, Schrödinger’s equation could be solved exactly for hydrogen, the simplest atom of all, and the resulting solution provided a remarkable template for working out the atomic structure of the rest of the periodic table; in fact it provided the theoretical foundation for why the periodic table exists. The picture that emerged is that a typical atom consists of a small, ultra-dense, positively charged nucleus surrounded by a cloud of electrons, where each electron occupies what is called a quantum eigenstate with a very sharply defined quantized energy level.

At this point we have to wonder: what on earth is a quantum eigenstate? To proceed further we need to pass through the looking glass into the nether world of atomic matter as described by quantum theory, which is the most consistent and accurate description that we have. Ostensibly, from the work of Rutherford and his colleagues, the typical atom consists of negatively charged electrons somehow circulating around a positively charged center, held in a tightly contained cluster by the inverse square Coulomb interaction. The first thing to note is that the notion that the electrons circle around the nucleus in a manner similar to the way the planets circle around the Sun is completely out the window. The Heisenberg Uncertainty Principle in particular dictates that we cannot know, even in principle, where inside the atom a particular electron might be at any given time. All we can know is where the electron tends to spend most of its time, i.e. the probability of finding the electron at any given point at any given instant. This comes about because quantum theory dictates that the state of any given electron is prescribed by what is known as a wave function, which must be a solution of Schrödinger’s equation. That leads us into the realm of some fairly abstract mathematics dealing with the solutions of differential equations. Sorting through the details of solving Schrödinger’s equation would lead us rather far into the hinterlands of differential equation theory, but we don’t have to make that journey to appreciate the final result. The basic results that emerge from laboring through the details are as follows:


Figure (1): Approximate energy level diagram for atomic matter

 1) The time-independent solutions of Schrödinger’s equation, which describe the atom at equilibrium, are solutions to what is commonly known as an eigenvalue problem.

 2) The typical eigenvalue problem states that the differential equation under consideration can have solutions only if certain key parameters have very specific values which arise out of the general eigenvalue solution procedure.

 3) For the case of a simple atom such as hydrogen the key parameters are the spatial coordinates of the orbiting electron or what are commonly called its degrees of freedom.  In Cartesian coordinates these are the (x,y,z) position values.  For atoms it is more convenient to use spherical coordinates so instead of  (x,y,z) we use (ρ,θ,φ) i.e. a radial coordinate plus two angular coordinates.

 4) Thus, for the hydrogen atom the state of the electron is described by 3 eigenvalues, one for each independent coordinate. For the radial coordinate ρ the eigenvalue is called n, where n can be any integer 1, 2, 3, …  For the angular coordinate θ the quantum number is designated by the integer l, which must be less than or equal to n – 1, or in symbols l ≤ n – 1. For the φ coordinate the eigenvalue number is designated by m, which is subject to the constraint –l ≤ m ≤ +l.

 5) There is yet one more phantom degree of freedom the electron can have and that is its spin which is designated by the symbol σ and can take on only the values ±1/2.  The electron spin is a wholly unexpected and mysterious degree of freedom that has to exist since it accounts for the magnetic moment of the electron.


 6) The final piece of the puzzle which completes the quantum description is the Pauli exclusion principle, which simply states that no two electrons within an atom can have exactly the same quantum numbers. Thus each electron must be described by different values of the numbers n, l, m and σ.

 The above half dozen results that arise out of solving Schrödinger’s equation for the hydrogen atom form the basis of the energy level diagram depicted in figure (1). Thus the energy of the electron, which depends on all the degrees of freedom but primarily on n, must also be quantized and therefore must be depicted by sharply defined discrete levels as shown in the figure.
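The counting implied by rules 3) through 6) can be made concrete with a short script. The sketch below is our own illustration, not part of the original essay; it simply enumerates the allowed (n, l, m, σ) quadruplets for the first few values of n and confirms the familiar shell capacities of 2, 8 and 18 electrons.

```python
# Enumerate the allowed (n, l, m, sigma) quadruplets implied by the rules above:
#   l = 0 .. n-1,   -l <= m <= +l,   sigma = +1/2 or -1/2.
# By the Pauli exclusion principle each quadruplet can hold at most one electron,
# so the count per n gives the shell capacity: 2, 8, 18, ...

def allowed_states(n):
    """Return every distinct (n, l, m, sigma) quadruplet for principal quantum number n."""
    states = []
    for l in range(n):                      # l <= n - 1
        for m in range(-l, l + 1):          # -l <= m <= +l
            for sigma in (+0.5, -0.5):      # the two spin values
                states.append((n, l, m, sigma))
    return states

for n in (1, 2, 3):
    print(f"n = {n}: {len(allowed_states(n))} states")   # -> 2, 8, 18
```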

 This is really a rather wondrous result, as it lets us piece together atomic structure and the periodic table from the simple hydrogen atom up to very complex multi-electron structures. Let’s cobble together the first few atoms to see how it works:

 1) HYDROGEN: Hydrogen has only one electron, which occupies the lowest energy level, traditionally designated by the label s (which stands for angular momentum l = 0). The x-axis labels in figure (1) correspond to the θ quantum number l and by tradition are given the labels s, p, d, f … corresponding to the l values 0, 1, 2, 3, … respectively. For the hydrogen electron in its lowest energy state, corresponding to n = 1, it can only have l = 0 according to rule 4 above, and thus m must also be 0 by the same rule. The spin quantum number σ can be either ±1/2. Thus from the point of view of quantum theory the hydrogen electron is described by the following quadruplet of magic numbers: (n,l,m,σ) = (1,0,0,1/2), where in the absence of any external magnetic field the spin quantum number σ could be either +½ or –½.

 2) HELIUM: As the process continues to more complex atoms with more electrons we simply fill the levels shown in figure (1) one by one always filling the lowest energy levels first so that at equilibrium the atom has the lowest possible energy.  Helium has a nucleus with 2 protons and thus accommodates two electrons to achieve electrical neutrality and the lowest possible energy configuration has both electrons in the n = 1 quantum level with l = m = 0.  Now the Pauli exclusion principle comes into effect and dictates that one electron will have spin value +1/2 and the other -1/2.  By rule 6 above the n = 1 quantum level is now filled with one electron in the (1,0,0,+½) state and the other in the (1,0,0,-½) state.   Once an n level has the maximum number of electrons allowed by the above rules it is said to be filled and atoms with a filled n level tend to be very stable.  Thus helium is at a filled level and is predicted to be very stable as is well attested by experiment.


Figure (2): XPS spectrum of a clean silicon wafer

 3) LITHIUM: Adding one more electron brings us to the element lithium, with three electrons. The first two electrons go into the n = 1 state with l = 0 and σ = ±1/2. Since the n = 1 state is now filled, the third electron has no choice but to move into the n = 2 level with l = 0, which is the 2s level shown in figure (1).

 4) HIGHER ATOMS: The process continues with the next element, beryllium, whose 4 electrons completely fill the 1s and 2s levels. The next atoms, boron through neon, must move into the next higher angular momentum state with l = 1, designated as the 2p level in figure (1). Since l = 1 allows the m quantum number to take on the values -1, 0, +1, this level can hold up to 6 more electrons, each m level accommodating 2 electrons by Pauli’s exclusion principle. So the next 6 electrons go into the 2p level in the diagram and account for boron through neon. At neon the 2p level is completely filled, and neon is thus predicted to be chemically inert, as is also borne out by experiment. The process continues, progressively filling the higher energy levels shown in figure (1), and terminates finally at uranium with 92 electrons and thus 92 protons in the nucleus. At this level of electric charge the electromagnetic forces in the nucleus reach a par with the normally much stronger nuclear forces, and the nucleus itself becomes unstable and prone to fission. Thus uranium is the largest atom found naturally in the environment; even larger atoms have been produced in large accelerators, but they have very short lifetimes.

 The above picture immediately implies that the electronic configurations of the different atoms provide unique tags for identifying each one, since each configuration comes with a unique binding energy, which is simply the energy required to remove an electron from a specific energy level into the void. Consider for example a fluorine atom sitting on a surface. The 1s electrons in fluorine have a binding energy of roughly 685 eV (electron volts), so if we see electrons at this energy being emitted from the surface we know that fluorine must be present. This is the basic concept underlying the XPS experiment. The XPS spectrometer bombards the surface with X-rays, which have sufficient energy to blast electrons bound to the various surface atoms out of their energy levels, and then sorts the emitted electrons by their unique binding energies, thus revealing which atoms are present at the surface. The XPS experiment also samples only the very top layers of the sample: once an electron has been emitted from its energy level it cannot travel very far through the surrounding material, since its electric charge causes it to interact strongly with all the surrounding atoms, which can capture it or deflect it back into the substrate. A free electron can typically go no more than about 100 angstroms through the material before being captured or deflected, so in effect the electrons which are ejected come from the top 100 angstroms of the surface, effectively making XPS a surface-sensitive technique.

Figure (2) shows an XPS spectrum of a nominally clean silicon wafer, which as manufactured consists of 100% pure single-crystal silicon. The XPS spectrometer, however, reveals a surprisingly different picture of the surface of the wafer. Those in the microelectronics industry know that an initially clean silicon surface will react fairly rapidly with ambient oxygen to form a layer of silicon dioxide, SiO₂. After a day or so sitting out in a clean room the initially pure silicon surface gathers a layer of SiO₂ on the order of tens of angstroms thick, which is far too thin to be detected by the human eye but is readily revealed in the XPS spectrum in figure (2). The large central peak in the figure comes from electrons emitted from the oxygen 1s level in the SiO₂. The two small peaks to the far right come from the silicon 2s and 2p levels respectively. This is not all, however, as the peak just to the right of the central oxygen 1s peak comes from the carbon 1s level and reveals that a contamination layer of a carbon-containing molecule is also present. This carbon contaminant most likely comes from some hydrocarbon or other, such as are present in nearly every ambient atmosphere. Finally, the very tiny peak to the left of the central oxygen line reveals that a minute amount of fluorine is present, as this peak represents electrons being emitted from the fluorine 1s level. It is quite unusual for fluorine to be present on a clean silicon wafer; the contaminant revealed in figure (2) entered through a rather surreptitious mechanism and also caused some rather unexpected problems with the adhesion behavior of this surface.
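A toy version of the peak-assignment step might look like the sketch below. This is our own illustration, not the spectrometer’s software; the reference binding energies are approximate handbook values, and a real instrument would work from a calibrated library and account for chemical shifts (such as the shift of the Si 2p line between elemental Si and SiO₂).

```python
# Toy XPS peak assignment: match measured peak positions (binding energy, eV)
# to the nearest entry in a small reference table. The reference values are
# approximate handbook numbers used purely for illustration.

REFERENCE_EV = {
    "F 1s": 686.0,
    "O 1s": 532.0,
    "C 1s": 285.0,
    "Si 2s": 150.0,
    "Si 2p": 100.0,
}

def assign_peak(measured_ev, tolerance_ev=5.0):
    """Return the closest known core level, or None if nothing lies within tolerance."""
    label, ref = min(REFERENCE_EV.items(), key=lambda kv: abs(kv[1] - measured_ev))
    return label if abs(ref - measured_ev) <= tolerance_ev else None

# Peaks roughly like those described for the "clean" silicon wafer above:
for peak in (686.5, 532.8, 285.2, 150.4, 99.6):
    print(f"peak at {peak} eV -> {assign_peak(peak)}")
```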

 Figure(2) represents a clear example of the invisible nature of surfaces by revealing three properties of a supposedly clean silicon wafer that would have been entirely invisible to the naked eye.

 [1] Geiger, Hans; Marsden, Ernest, “The Laws of Deflexion of α Particles through Large Angles,” Philosophical Magazine, Series 6, 25 (148): 604-623 (1913).

5. December 2014   7:59 pm
Andy Stecher
Elgin, IL


We’re pleased to let you know that Plasmatreat has been selected to present at a brand-new, groundbreaking conference hosted by Plastics News. The conference, Plastics In Automotive: Building Tomorrow’s Car, will take place in Detroit, MI, on January 13-14, 2015.

As more automakers turn to new material choices to reduce the weight of their vehicles and to maximize fuel efficiency, innovations in plastics are driving the development of new vehicles that will spur further growth. Meanwhile, such issues as CAFE standards and the use of composite structures provide grounds for further discussion.

The conference will be exploring many of these important topics. I am delighted to be speaking, on behalf of Plasmatreat, on “Automotive Surfaces and the Prospects for Plasma Coatings.” Plasmatreat offers enabling technology for automotive lightweighting.

The conference will coincide with Industry Preview at the North American International Auto Show and will include numerous presentations and panels featuring leading OEMs, Tier One suppliers and other experts on the development of automotive plastics. Attendees will also receive a ticket to the world’s largest auto show, where they can witness the latest in innovation and new vehicles.

As an added bonus, attendees have an exclusive opportunity to register to attend the Automotive News World Congress networking dinners, featuring keynote addresses from Mary Barra, CEO of General Motors, and Sergio Marchionne, Chairman & CEO of Chrysler Group LLC and CEO of Fiat S.p.A.

We’d love to see you there! Click here for more information about the conference.

Khoren Sahagian
Materials Scientist

Editorial July 2014

Plasma treatments are a permanent and covalent substrate modification.  However, many references note diminishing effects of plasma treatments with time.  One generalized conclusion is that plasma modification is a temporary effect.  This conclusion is not inherently accurate, nor is it applicable to all plasma and material systems.  In truth there are many factors that govern the success and longevity of a plasma modification.  Research in plasma lacks harmonization in equipment, setup/configuration, and material selection, which are key variables in a plasma modification.  Results from one method may not necessarily translate well to another experimental setup or class of material.  For this reason some engineering reviews of gas plasma do more to confound than to elucidate the scientific dialogue within industry.

 

Equipment design is of particular relevance in the plasma industry.  This includes, but is not limited to, the electrode configuration, matching, RF frequency, and equipment geometry.  Many apparatuses used in academia are custom-fabricated or are custom modifications of existing tools.  Such equipment exemplifies engineering capability, but in my opinion its effectiveness is specific to a material system and rarely generalizable to all materials or apparatus.

 

Plasma chemistry and substrate material should be matched correctly.  Some polymer systems may be either resistant or sensitive to a specific plasma chemistry.  It is not enough to report gas, pressure, and power: a complete characterization should include the plasma stoichiometry and a hypothesis of the surface interaction.  Furthermore, it must be accepted that many polymer systems are mobile, may swell with gas or moisture, or may undergo relaxation mechanisms.  Therefore, take care to pair a material system with an appropriate plasma source and plasma chemistry.

7. July 2014   2:26 pm
Dr. K. L. Mittal, Dr. Robert H. Lacombe

We continue the INVISIBLE UNIVERSE blog after a delay due to our heavy involvement in the recently completed 9th INTERNATIONAL SYMPOSIUM ON CONTACT ANGLE, WETTABILITY AND ADHESION held at Lehigh University June 16-18, 2014.

Speaking of CONTACT ANGLE phenomena, this would be a good time to review the topic, as the contact angle method is far and away the most popular surface analysis method in use today. The technique is of special interest to anyone doing surface modification, whether by plasma or any other method, since it is the simplest and least expensive way of analyzing the impact of any surface treatment whatsoever. Fortunately, our office has received a recently published volume on this topic which we will review shortly; but first, a rudimentary introduction to the contact angle behavior of droplets for the sake of those who may be new to the subject.


Fig.(1). Classic definition of the equilibrium contact angle of a drop of liquid on a surface as the balance of three surface tensions.

When a drop of some liquid is placed on a surface it will typically bead up and form a sessile drop as exhibited in Fig. (1). The angle that the edge of the drop makes with the underlying solid is determined by the balance of three surface tensions, or surface energies if you prefer. The concept of surface energy was covered in some detail in the February 2014 issue of this blog, so it will not be reviewed here except to point out that surface tensions as measured in N/m are dimensionally the same as surface energies (J/m²), which are readily derived from surface tensions by multiplying the numerator and denominator of the units by m (meters), giving N-m/m², or J (joules)/m².
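For readers who would like to see the balance written out, Young’s equation sets γ_SV = γ_SL + γ_LV·cos θ, so the equilibrium angle follows directly from the three tensions. The short sketch below is our own illustration, not drawn from the book reviewed next, and the numerical values in it are made up purely for demonstration (units of mN/m, equivalently mJ/m²).

```python
import math

# Young's equation:  gamma_SV = gamma_SL + gamma_LV * cos(theta)
#             =>     cos(theta) = (gamma_SV - gamma_SL) / gamma_LV
# The numbers below are illustrative only (units: mN/m, i.e. mJ/m^2).

def contact_angle_deg(gamma_sv, gamma_sl, gamma_lv):
    """Equilibrium contact angle (degrees) from the three surface tensions."""
    cos_theta = (gamma_sv - gamma_sl) / gamma_lv
    cos_theta = max(-1.0, min(1.0, cos_theta))  # clamp: complete wetting or dewetting
    return math.degrees(math.acos(cos_theta))

# A water drop (gamma_LV ~ 72.8 mN/m) on a hypothetical low-energy polymer:
print(contact_angle_deg(gamma_sv=40.0, gamma_sl=5.0, gamma_lv=72.8))  # ~61 degrees
```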

And now to our book review:

Wetting of Real Surfaces, by Edward Yu. Bormashenko (Walter de Gruyter GmbH, Berlin/Boston, 2013)

In this compact monograph Prof. Bormashenko provides a rather comprehensive account of the theoretical developments in the field of contact angle and wettability of surfaces.

Interestingly, Prof. Bormashenko points out that the field of contact angle and wettability remained rather a backwater of modern physics from the time of Thomas Young’s pioneering work up to roughly the 1990s, despite the fact that scientific heavyweights such as Einstein, Schrödinger and Bohr devoted a significant portion of their research activity to the topic. Much of this stems from the fact that surfaces presented a rather messy and intractable research topic, owing to the difficulty of obtaining well-defined surfaces free of contamination and other defects. Indeed, the eminent theoretical physicist Wolfgang Pauli remarked that “God created matter but the Devil created surfaces”. Thus the solid state physics literature up to about the early 1980s tended to be dominated by topics such as superconductivity, electronic band structure, phase transitions, semiconductors and similar subjects dealing primarily with the bulk behavior of solids.

This all started to change significantly by about the 1980s, led in large part by the microelectronics industry, which was fabricating multilevel thin film structures that were becoming more and more dominated by interfacial surfaces between metals, insulators and semiconductors.  Even by the early 1970s it was becoming apparent that in order to fabricate devices with higher and higher circuit densities it was critical to understand the nature of the interactions between the various material components at their contact surfaces. This need was supported by advances in microscopy, starting with electron microscopy and evolving further to electron tunneling microscopy and finally to the now ubiquitous atomic force microscopy. On top of this, a number of surface analysis techniques emerged, nearly too numerous to mention, the most popular being X-ray Photoelectron Spectroscopy (XPS, also called ESCA, Electron Spectroscopy for Chemical Analysis).

The need for understanding surface properties was of course not limited to the microelectronics industry. The entire coatings industry needed to understand the wetting properties of various paints and inks and the biotechnology industry dealing with medical implants needed to understand how the surfaces of their devices would interact in the in vivo environment. The contact angle technique thus started to emerge as a low cost and highly sensitive method for exploring the wetting behavior of surfaces.

A critical juncture of sorts was reached with the work of Barthlott and Neinhuis in 1997 [1], who first studied the extreme hydrophobicity of the lotus leaf and its effect in removing all manner of detritus from the leaf’s surface. This work led to a literal explosion of work on the superhydrophobic effect and a variety of applications to self-cleaning surfaces and other highly innovative technologies.

Getting back to Prof. Bormashenko’s volume, a brief look at the table of contents reveals a rather wide range of topics:

  1. What is surface tension
  2. Wetting of ideal surfaces
  3. Contact angle hysteresis
  4. Dynamics of wetting
  5. Wetting of rough and chemically heterogeneous surfaces: the Wenzel and Cassie models
  6. Superhydrophobicity, superhydrophilicity and the rose petal effect
  7. Wetting transitions on rough surfaces
  8. Electrowetting and wetting in the presence of external fields
  9. Nonstick droplets

There is clearly not enough space here to cover all of the above topics in any detail, so we will focus on chapter 6, dealing with superhydrophobic and superhydrophilic phenomena which, as Prof. Bormashenko points out, are among the most actively researched topics in the contact angle field.


Fig.(2). Schematic illustrating the difference between a truly superhydrophobic surface and one exhibiting only a high contact angle

Interestingly, the author points out that exhibiting a high contact angle is not sufficient to define a state of superhydrophobicity, as might casually be assumed. In addition, the contacting water drop must also exhibit low contact angle hysteresis; that is, the advancing and receding contact angles must be approximately the same. This property is required for the so-called “lotus leaf” effect, where water drops not only form with a high contact angle but also roll very easily off the leaf, carrying any collected debris with them, as shown in Fig. (2).

The counter-example is the “rose petal” effect reported on by Jiang and co-workers [2]. These investigators looked at water droplets on rose petals, which also form very high contact angles, but unlike the lotus leaf case these drops also exhibit a very strong hysteresis. An immediate consequence is that the drops do not roll off even when the petal is held at a steep angle, as also shown in Fig. (2).


Fig.(3). Example of a hierarchical relief morphology.

 A further subtle point brought out in the chapter is that the substrate material does not have to be highly hydrophobic in order to exhibit the superhydrophobic effect; the lotus leaf material is in fact hydrophilic. What gives rise to the superhydrophobic behavior is the hierarchical relief morphology of the surface. An example of such a structure would be the fractal Koch curve shown schematically in Fig. (3). The author goes on to analyze the wetting of these highly variegated surfaces in terms of the Wenzel and Cassie models covered in chapter 5.
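For reference, the two models just mentioned can be written very compactly: Wenzel predicts cos θ* = r·cos θ for a drop that follows the roughness (r ≥ 1 is the ratio of true to projected surface area), while Cassie-Baxter predicts cos θ* = f·(1 + cos θ) - 1 for a drop resting partly on trapped air (f is the wetted solid fraction). The sketch below, with illustrative numbers of our own choosing rather than anything taken from the book, shows how texture can push a mildly hydrophilic material (intrinsic angle 70°) toward an apparently superhydrophobic state in the Cassie regime.

```python
import math

# Apparent contact angles on rough surfaces (illustrative numbers only):
#   Wenzel:        cos(theta*) = r * cos(theta)             (drop follows the relief)
#   Cassie-Baxter: cos(theta*) = f * (1 + cos(theta)) - 1   (drop sits partly on air)

def wenzel_deg(theta_deg, roughness_ratio):
    cos_star = roughness_ratio * math.cos(math.radians(theta_deg))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_star))))

def cassie_deg(theta_deg, solid_fraction):
    cos_star = solid_fraction * (1.0 + math.cos(math.radians(theta_deg))) - 1.0
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_star))))

theta = 70.0  # intrinsic (Young) angle on the smooth material: mildly hydrophilic
print(wenzel_deg(theta, roughness_ratio=1.8))   # roughness makes it wet more: ~52 degrees
print(cassie_deg(theta, solid_fraction=0.1))    # mostly air underneath: ~150 degrees
```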

All in all, this volume can be highly recommended to anyone interested in coming up to date on the latest theoretical developments in the rapidly expanding field of contact angle phenomena.

The author invites any inquiries or comments on this article.


 [1] “Purity of the Sacred Lotus or Escape from Contamination in Biological Surfaces”, W. Barthlott and C. Neinhuis, Planta, 202, 1 (1997).
[2] “Petal effect: A superhydrophobic state with high adhesive force”, L. Feng, Y. Zhang, J. Xi, Y. Zhu, N. Wang, F. Xia and L. Jiang, Langmuir, 24, 4114 (2008).
Mikki Larner
Vice President Sales & Marketing
Belmont, CA

Editorial May 2014

I sell gas plasma technology.

This can be confusing, as there are several types of plasmas, both naturally occurring (such as the Northern Lights, lightning, and stars) and human-made (such as those used in neon signs, fluorescent lights, and plasma televisions).

From the examples above, it’s clear that plasma generates both light and energy. Plasma can also be used to modify – or, more specifically, molecularly re-engineer – other materials.

My company sells plasma technologies and processes for modifying a myriad of materials. Typically, the application is surface cleaning and activation, either to prepare plastic or metal for a subsequent coating or bonding step, or a thin-film coating used to change the barrier or coefficient-of-friction properties of a surface.

There are many different ways to manufacture human-made plasma.
We use primarily atmospheric and low-pressure plasma technologies in our work. There are a number of benefits to the low-pressure approach:

1.   For starters, the working environment is a primary plasma. In a primary plasma, there is a greater mean free path of the particles before a collision.

This sustained energy is ideal for modifying the interstices of porous media (such as a non-woven or sintered polymer), or for use inside complex nano-scale vias or channels. With atmospheric processes, on the other hand, the mean free path is very short, so the treatment area is limited (see the rough mean-free-path estimate sketched after this list).

2.   Low-pressure plasma offers chemistry versatility. Many different gases and vapors can be used, safely and economically. Low-pressure plasma is often used as a replacement technology for wet chemistry processes, providing greater control, lower costs, and lower risk of workplace exposures that could lead to accident or injury.

Additionally, unlike many wet chemistry processes, rinsing and curing is not required with a low-pressure process. This means a much shorter processing time, minutes versus hours in some cases.

In atmospheric processes, use of these chemistries may be dangerous and quantities required to generate the plasma may not be economical. This is one of the reasons that our Openair® technology uses just air. It’s incredibly cheap, readily available, and great for many industrial high-speed surface preparation processes.

3.   When using a low-pressure technique, multiple steps may be run in a single process. A part may be exposed to a cleaning gas chemistry (to remove contaminants from a surface) as well as an activation or coating process in a single run. It is not unusual for a single plasma process to replace two to three manual steps, eliminating overhead costs associated with transporting product, labor and materials.

4.   Another advantage is that the low-pressure process provides an extremely controlled environment. The process is conducted in a vacuum chamber with exacting control of gas flow, time, and power. Variations in the day-to-day environment are removed, and the precise process is readily reproducible. Additionally, cleanliness is assured, whether the process is practiced in a clean room or on an industrial manufacturing floor.

5.   Low-pressure technology allows for permanent, stable results. This means that a large batch of parts may be treated and stored prior to use. Or, alternately, parts may be shipped to other manufacturing sites for final assembly.

6.   Our low-temperature process enables treatment of thermally sensitive materials, and the process is free from electrical potential. Therefore, conductive materials may be safely modified.

7.   The total cost of consumables, including energy, gases/liquids, and maintenance parts, is typically less than $5 per hour. Furthermore, there are no additional costs for hazardous waste disposal, as none is created.

8.   The technology offers high-batch throughput:

•   Line speeds, in our standard R2R equipment, are up to 100 fpm with, again, no time required for curing or drying steps.
•   Cycle times, during batch processing, range from 60 seconds to 20 minutes. Because a single batch may include hundreds or even thousands of parts, each individual part is effectively treated in a fraction of a second (a 10-minute cycle over 1,000 parts, for example, works out to 0.6 seconds per part).
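As promised under point 1, here is a rough idea of why operating pressure matters so much for treatment reach. The kinetic-theory estimate λ = k_B·T / (√2·π·d²·p) is standard; the pressure and molecular-diameter values in the sketch below are merely representative assumptions, not specifications of any particular piece of equipment.

```python
import math

# Rough kinetic-theory estimate of the mean free path of a gas molecule:
#   lambda = k_B * T / (sqrt(2) * pi * d^2 * p)
# The pressure and molecular diameter below are representative values only.

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # roughly room temperature, K
D = 3.7e-10          # approximate kinetic diameter of an N2/air molecule, m

def mean_free_path_m(pressure_pa):
    return K_B * T / (math.sqrt(2) * math.pi * D**2 * pressure_pa)

print(f"~10 Pa (low-pressure plasma): {mean_free_path_m(10.0) * 1e3:.2f} mm")
print(f"101325 Pa (atmospheric):      {mean_free_path_m(101325.0) * 1e9:.0f} nm")
```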

Our job is to accurately evaluate your application and select the most appropriate technology solution for your production goals, be it low-pressure, corona, flame, or Openair. In rare cases, a simple IPA wipe may be all that’s needed to solve your adhesion problems!

Thanks for reading. If you have any questions, I welcome your calls and emails.

Khoren Sahagian
Materials Scientist

Editorial May 2014

The current generation of consumers will eventually be displaced by the millennials.  Product development and marketing experts will be learning to cope with the new idiosyncrasies of ‘Generation Y’.  Firstly, many of these individuals do not form strong allegiances to brands.  Second, emotional connections appear to have the greatest influence on consumer selection.  And finally, first discovery carries greater importance.  The emerging rules are reminiscent of a Japanese candy bar shelf: no consecutive month will display the same colors, cartoons, or shapes.  So what are some potential implications for the consumer manufacturing space?  Some trends are already becoming clear.

GenY Automation: Versatile robotic platforms continue to be integral to production implementation.  Some of the new mechanization is more mobile, easily programmable, rapidly deployed, and cross-disciplinary across many different categories of operation.

GenY Materials: Whether olefin or bio-based, the custom polymer formulation could lose attractiveness.  Engineers could abandon designing new materials through chemistry in favor of more accessible technologies that relieve material constraints through processing methods.

GenY Fabrication: 3D printing promises new and highly customized fabrication that is uninhibited by classical machine tools or setups.  DIY design will empower individuals to create niche products for myriad markets.  And after this revolution, 4D printing envisions the self-assembly of structures, much like proteins inside living bodies.

GenY Environment: Future consumers assert a greater demand for re-usability and low environmental impact.  There is less tolerance for waste on an ever-shrinking planet with finite resources.

The manufacturing plants best adapted to the changing landscape will claim the lion’s share of consumer purchases.  The challenges will be non-trivial.  On the inside, consumer products will require simple molecules that are biodegradable, easily formed, and bondable.  On the outside, these products may take on radical forms, become regional fads, and have short life cycles.

3. April 2014   12:39 pm
Dr. K. L. Mittal, Dr. Robert H. Lacombe

Is Nylon Hydrophobic, Hydrophilic or Maybe Both?

In a recent posting on LINKEDIN, Scott Sabreen, Owner-President of The Sabreen Group Inc., initiated the following discussion:

 

Nylons are inherently difficult to bond because they are hydrophobic, chemically inert and possess poor surface wetting. …

Nylons are hygroscopic and will absorb moisture in excess of 3 percent of its mass of water from the atmosphere. Moisture, in and of itself, creates adhesion problems.  …

 

Hold on: on the face of it, the above remarks would seem to be mutually contradictory.  Is nylon hydrophobic or hydrophilic?

The resolution of this apparent paradox comes in recognizing that the hydrophobic behavior of nylon is a surface property and the hydrophilic behavior is a bulk property.

Since nylon is an organic polymer it has a relatively low surface energy as do most polymers.  This is a consequence of the surface chemistry and surface physics of polymers and other organics as discussed in the previous edition of this blog.

However, the amide groups in the nylon chain attract water, and they give rise to the hydrophilic behavior of this material with regard to BULK ABSORPTION of water.  A number of other polymers, such as the polyimides, behave in a similar manner.

So in the bulk nylon can behave as a hydrophilic material, but at the surface it can exhibit hydrophobic behavior. Just another hidden property of surfaces that makes them both tricky and fascinating to study.

The author invites any inquiries or comments.

3. March 2014   12:35 pm
Dr. K. L. Mittal, Dr. Robert H. Lacombe

Origin of Surface Energy

In the December 2013 issue of this blog we noted that nearly all the information we commonly acquire about any given surface comes from the radiation reflected from it.  We also noted that, of the entire range of the electromagnetic spectrum a surface could possibly radiate, we detect only the so-called visible spectrum, which amounts to barely 2% of what could be emitted. Thus what our eyes alone tell us about what is happening on a given surface is very limited indeed.  Not only that, but the type of information is limited to basically the bulk geometry, gross surface morphology and color.

The fact of the matter is that an entire universe of important physical properties remains invisible to our eyes.  The invisible property we want to explore here in fact cannot be seen even in principle.  This property is what is called the surface energy, and to get a picture of it we need the apparatus of thermodynamics.

The whole concept of energy is rather subtle and intricate in general.  In particular it can take many forms including:

  • Electromagnetic energy stored in electric fields
  • Magnetic energy stored in magnetic fields
  • Thermal energy stored in any material at a finite temperature
  • Potential energy of any mass in a gravitational field
  • Kinetic energy of any moving object
  • Relativistic energy of any massive object as given by Einstein’s famous formula E = mc²
  • etc.

However, for our purposes we only need to understand the elastic energy stored in common solids, and we can approach this by considering the behavior of a common spring.  Stretch or compress a spring and it will store a certain amount of elastic energy, which can be perceived by allowing the spring to return to its equilibrium length. Much the same type of behavior goes on in common solids.  Figure (1) gives a highly idealized but reasonably realistic picture of a solid material viewed at the atomic level.


Fig. 1: Schematic diagram of a solid viewed at the atomic level. To a first approximation the atoms can be thought of as being held together by microscopic springs which account for the elastic properties of the material.

The atoms/molecules in a given solid are held together by atomic and intermolecular forces which arise from the rather complex electromagnetic interactions among the electrons and nuclei that make up the bulk of any material. Fortunately, near equilibrium and for small deformations these interactions behave in a linear fashion, very much like simple springs.  Thus, as Robert Hooke pointed out more than three centuries ago, the restoring force tending to bind the atoms together increases in a linear fashion as they tend to separate from one another. Things get quite a bit more complicated at large deformations, but that need not concern us here.

Referring to the upper diagram in figure (1) we see that a typical atom in the bulk of our hypothetical solid feels either tensile or compressive loads from all directions, and much the same is experienced by all the rest of the atoms in the deep interior of the solid.  However, the situation is quite a bit different for those atoms at or near the surface, as shown in the bottom diagram of figure (1).  Looking down into the bulk of the material they feel the same forces as the bulk atoms do, but now there is no material on top, which creates a highly asymmetrical situation.  It is precisely this asymmetry that gives rise to the unique surface tension or surface energy of the solid.

 

A Word on Units

Perhaps one of the most confusing things about surface energies is the units they are expressed in, so we take a quick break here to clear up this issue.  Going back to our spring: if we stretch it, there arises an immediate force tending to return it to the unstretched length.  Current international convention expresses this force in the standard SI unit [1], the newton. All systems of units are essentially arbitrary, but it is nonetheless important to settle on a common standard.  Thus the common SI unit for force is the newton, with the dyne and the pound also in use but not considered standard by the international community.  The newton is the canonical unit of the international scientific community and the dyne is a scaled-down derivative.  The pound is an archaic holdover from the past but is still much in use in commercial transactions, especially in the USA.

Focusing on the newton, it is formally defined as the force required to accelerate a one-kilogram mass resting on a friction-free surface by one meter per second per second.  That means the mass increases its speed by one meter per second every second under the action of the applied 1 newton load.  We can also think of the newton in more intuitive, if less rigorous, terms as the weight of a common apple at sea level.  Thus if you hold a standard-sized apple it imparts a force of close to one newton on your hand.  In terms of pounds the apple weighs slightly more than 1/5 of a pound, and in dynes it weighs about 100,000 dynes.

With the concept of force now rigorously defined we move on to the concept of energy, again using our standard apple as a prop. Force times distance is energy in the form of work. Let’s assume that the apple weighs exactly 1 newton.  If we raise the apple from the ground to a height of 1 meter we will have done 1 joule’s worth of work, or, putting it a little differently, we will have increased the apple’s gravitational energy by one joule.

Getting back to our spring, it stores what is called elastic energy.  As stated above, energy is force times distance, and the restoring force of a spring is proportional to the extension, so the energy stored in an extended spring is proportional to the extension squared.  These ideas can all be compactly summarized in the following formulas:

F = -k x (Restoring force exerted by a stretched spring) (1)

Where:
F = Force in newtons
x = Displacement in meters
k = Spring constant in newtons/meter

The minus sign in Eq(1) indicates the force is always a restoring force tending to oppose any extension or compression.

The energy stored in the spring is the work done against the force in Eq. (1), integrated from 0 to some extension d:

W = -∫₀ᵈ F dx = (½)kd²   (Energy stored in stretched spring) (2)

Thus if our spring has a spring constant of 1 newton/meter and we extend it to a distance of 1 meter it will pull back with a force of 1 newton.  Also it will store an energy of ½ joule.

We can think of the spring as having a tension of 1 newton/meter, which is just another name for the spring constant.  Turning now to the physical surface of a polymer such as nylon, the springs binding the atomic units together would have a tension of about 8×10⁻²⁵ newtons/meter, and this is the basis of the so-called surface tension or surface energy of this material.  Now in practice we do not deal with such impossibly small numbers, so we need to scale things up a bit.  In the case of our nylon, one square meter of the surface will contain something like 5×10²² molecular bonds among the surface moieties, or in terms of our simple spring model, 5×10²² springs.  So we take the surface tension of our polymer to be the tension in a single bond times the total number of bonds in a square meter, which for our nylon material comes to 40×10⁻³ newtons/meter.  This is still too awkward, so we introduce the millinewton (abbr. mN), which is 10⁻³ newtons, so nylon now has a surface tension of 40 mN/m (abbreviating the meter as m).  Why stop here?  We can do a little algebra on the units and say that 40 mN/m is the same as 40 mN-m/m², by multiplying numerator and denominator by m.  Now the mN-m we recognize as a mJ, or millijoule, and so our nylon can be thought of as having a surface tension (aka surface energy) of 40 mJ/m².  And it does not stop here.  Many folks dealing with surface tension measurements would rather not have to deal with the milli prefix and such huge surface areas as a square meter.  It is more convenient to scale down the force unit to dynes (10⁻⁵ newtons) and use square centimeters (abbr. cm) instead of square meters.  Thus 40 mN/m scales down to 40 dynes/cm.
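The chain of unit conversions above is easy to lose track of, so here is the same bookkeeping as a short script. This is just our own sketch; the per-bond tension and the bond density are the illustrative round numbers used in the text.

```python
# Reproduce the unit bookkeeping from the text: a per-bond "spring tension"
# times the number of bonds per square meter gives the macroscopic surface
# tension, which can then be restated as mN/m, mJ/m^2, or dynes/cm.

tension_per_bond = 8e-25   # N/m per molecular bond (illustrative value from the text)
bonds_per_m2 = 5e22        # bonds per square meter of surface (illustrative)

surface_tension = tension_per_bond * bonds_per_m2     # 0.04 N/m

mn_per_m = surface_tension * 1e3                      # 40 mN/m
mj_per_m2 = mn_per_m                                  # numerically identical: mN/m == mJ/m^2
dyn_per_cm = surface_tension * 1e5 / 1e2              # N -> dyn (x 1e5), per m -> per cm (/ 1e2)

print(mn_per_m, mj_per_m2, dyn_per_cm)                # -> 40.0 40.0 40.0 (up to float rounding)
```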

 

Real Solid Surfaces

Now that we have pinned down suitable units for measuring surface energies we can have a closer look at real material surfaces.  The diagrams in figure 1 give a much better depiction of a liquid surface than of a solid.  Liquids have high mobility and can always adjust their configuration to give a uniform surface of minimum surface tension.  Not so with solids.  The surface configuration of a solid depends sensitively on the thermal-mechanical loading conditions under which it was created.  Was the material cooled rapidly or slowly?  What type of loads, if any, were active during the cooling process?


Fig. 2: More realistic depiction of an actual solid surface

Figure 2 gives a more realistic depiction of what a typical solid surface looks like.  This figure depicts 4 typical surface flaws that can significantly alter the surface energy of any real solid:

  1. VOIDS: Materials that have been rapidly quenched may not have time to completely condense, giving rise to voids, which are a source of tensile stress that will alter the surface energy in their vicinity.
  2. INCLUSIONS: No material is 100% pure, and contaminant species have a strong tendency to migrate toward surfaces, where they upset the normal packing and in many cases give rise to a local compressive stress.
  3. GRAIN BOUNDARIES/DISLOCATIONS: Nearly all crystalline and semi-crystalline materials are polycrystalline in nature.  That is, they are made up of an aggregate of a large number of small crystals all packed together in no particular order.  The boundary where two crystallites meet forms what is called a grain boundary.  Further, the misalignment of planes within the crystallites gives rise to what are called dislocations.  These and other imperfections can give rise to local stress fields where they intersect a surface, which again alter the local surface energy.
  4. CONTAMINATION: Of all the surface imperfections, contamination layers have the most profound effect on surface energies.  Real material objects sit around on benches in the lab or other platforms where they are subject to constant bombardment from all the contaminants and gases in a typical atmosphere, not to mention the greasy fingers of human handlers.

Needless to say all of these considerations make the accurate measurement of the surface energies of solids a rather tricky business. But enough for now.  We take up this question in the next chapter.

The author invites any inquiries or comments.

 

 


            [1] The International System of Units (abbreviated SI, from the French Le Système international d’unités). The General Conference on Weights and Measures, an organization set up by the Convention of the Metre in 1875, brought together many international organizations to agree not only on the definitions of the SI but also on rules for writing and presenting measurements in a standardized manner around the globe.

 

 

17. February 2014   8:30 am
Dr. K. L. Mittal, Dr. Robert H. Lacombe

We interrupt the normal flow of this blog on “SURFACES: THE INVISIBLE UNIVERSE” to present a book review of a volume which should be of keen interest to all working in the field of atmospheric plasma technology.  The volume in question is:

ATMOSPHERIC PRESSURE PLASMA TREATMENT OF POLYMERS: RELEVANCE TO ADHESION;  Eds. Michael Thomas and K. L. Mittal (WILEY-Scrivener Publishing, 2013)

The volume contains 15 review articles ranging from surface modification with plasma printing  to the deposition of nanosilica coatings on plasma activated polyethylene.  Each article has been produced by leading experts in the respective topics and gives a truly authoritative examination of the subject matter under review.  A quick look at the table of contents reveals the remarkable scope of this volume.

PART 1: FUNDAMENTAL ASPECTS

  1. Combinatorial Plasma-based Surface Modification by Means of Plasma Printing with Gas-carrying Plasma Stamps at Ambient Pressure;  Alena Hinze, Andrew Marchesseault, Stephanus Büttgenbach, Michael Thomas and Claus-Peter Klages
  2. Treatment of Polymer Surfaces with Surface Dielectric Barrier Discharge Plasmas; Marcel Šimor and Yves Creyghton
  3. Selective Surface Modification of Polymeric Materials by Atmospheric-Pressure Plasmas: Selective Substitution Reactions on Polymer Surfaces by Different Plasmas; Norihiro Inagaki
  4. Permanence of Functional Groups at Polyolefin Surfaces Introduced by Dielectric Barrier Discharge Pretreatment in Presence of Aerosols;  R. Mix, J. F. Friedrich and N. Inagaki
  5. Achieving Nano-scale Surface Structure on Wool Fabric by Atmospheric Pressure Plasma Treatment;  C. W. Kan, W. Y. I Tsoi, C. W. M. Yuen, T. M. Choi and T. B. Tang
  6. Deposition of Nanosilica Coatings on Plasma Activated Polyethylene Films;  D. D.  Pappas, A. A. Bujanda, J. A. Orlicki, J. D. Demaree, J. K. Hirvonen, R. E. Jensen and S. H. McKnight
  7. Atmospheric Plasma Treatment of Polymers for Biomedical Applications;  N. Gomathi, A. K. Chanda and S. Neogi

PART 2: ADHESION ENHANCEMENT

  1. Atmospheric Pressure Plasma Polymerization Surface Treatments by Dielectric Barrier Discharge for Enhanced Polymer-polymer and Metal-polymer Adhesion;  Maryline Moreno-Couranjou, Nicolas D. Boscher, David Duday, Rémy Maurau, Elodie Lecoq and Patrick Choquet
  2. Adhesion Improvement by Nitrogen Functionalization of Polymers Using DBD-based Plasma Sources at Ambient Pressure;  Michael Thomas, Marko Eichler, Kristina Lachmann, Jochen Borris, Alena Hinze and Claus-Peter Klages
  3. Adhesion Improvement of Polypropylene through Aerosol Assisted Plasma Deposition at Atmospheric Pressure;  Marjorie Dubreuil, Erik Bongaers and Dirk Vangeneugden
  4. The Effect of Helium-Air, Helium-Water, Helium-Oxygen and Helium-Nitrogen Atmospheric Pressure Plasmas on the Adhesion Strength of Polyethylene;  Victor Rodriguez-Santiago, Andres A. Bujanda, Kenneth E. Strawhecker and Daphne D. Pappas
  5. Atmospheric Plasma Surface Treatment of Styrene-Butadiene Rubber: Study of Adhesion Ageing Effects;  Cátia A. Carreira, Ricardo M. Silva, Vera V. Pinto, Maria José Ferreira, Fernando Sousa, Fernando Silva and Carlos M. Pereira
  6. Atmospheric Plasma Treatment in Extrusion Coating:  Part 1 Surface Wetting and LDPE Adhesion to Paper;  Mikko Tuominen, J. Lavonen, H. Teisala, M. Stepien and J. Kuusipalo
  7. Atmospheric Plasma Treatment in Extrusion Coating:  Part 2 Surface Modification of LDPE and PP Coated Papers;  Mikko Tuominen, J. Lavonen, J. Lahti and J. Kuusipalo
  8. Achieving Enhanced Fracture Toughness of Adhesively Bonded Cured Composite Joint Systems Using Atmospheric Pressure Plasma Treatments;  Amsarani Ramamoorthy, Joseph Mohan, Greg Byrne, Neal Murphy, Alojz Ivankovic and Denis P. Dowling

Fig. (1): Schematic two-dimensional slice through a typical plasma stamp

A cursory glance at the above list readily gives one the impression that the applications of the atmospheric plasma technique are limited solely by one’s imagination.  It is also clear that this short review will be able to cover only a small fraction of the material in this volume.  Quite likely the most innovative paper in the collection is the one on “Combinatorial Plasma-based Surface Modification …” listed as number 1 above.  This work attempts to take the process of plasma surface modification to a higher level through the use of “plasma stamps”, which can be used to pattern a substrate with varying levels of plasma treatment in a single run.  A schematic diagram of a plasma stamp is shown in figure (1).  The substrate to be treated is patterned with an array of chambers using poly(dimethylsiloxane) (PDMS) as an insulator layer.  The resulting array is sandwiched between a porous metal mesh and an electrode.  The metal mesh in this case serves a dual purpose, as a gas carrier and as an electrode.

The authors cite a number of advantages of the plasma stamp configuration, including:

  • Due to the small size of the plasma chambers it is easy to supply nearly unlimited volumes of gas to the active micro-plasmas, which is very useful when performing film depositions as opposed to simply performing a surface modification.
  • Again due to the small cavity size, the stamp can be rapidly filled using a small amount of gas.  Thus the process is not only economical in its use of gas, but the small chambers can be rapidly purged of unwanted oxygen, which is a critical requirement when performing plasma nitrogenation treatments.
  • The small cavity size also allows reaction products created in the cavities that are not deposited to be swept away efficiently in the gas stream.  This is very useful in preventing fouling due to the redeposition of plasma polymers.
  • Quite likely the most significant advantage of the plasma stamp technology is the fact that quite large arrays of the plasma micro-cavities can be created, allowing for very efficient combinatorial studies of different plasma treatments on a single substrate in a single run.  Thus one can easily imagine a two-dimensional array where two different gas streams are independently introduced to the array from opposite sides of the inlet edge.  The streams will combine continuously across the entire array of cavities, giving a well-defined gradient of gas composition over the entire array.  Different cavities thereby receive different treatments depending on their location in the overall array.  The results can then be inspected by any of a number of surface analysis methods, such as Fourier Transform Infrared spectroscopy (FTIR) or X-ray Photoelectron Spectroscopy (XPS).  Thus a large number of different surface treatments can be screened in a highly efficient manner.

Again the large scope of the volume does not allow us to comment on the other equally interesting articles. It should be clear, however, as mentioned above that the possibilities are limited only by the imagination.
