Tag: Technology

  • Podcast Ep. #34 – Aerospace Startups and the ATI Boeing Accelerator

    Wil Benton is the Venture & Ecosystem Director for the ATI Boeing Accelerator in London, UK. The Aerospace Technology Institute (ATI) is a UK organisation that creates the technology strategy for the UK aerospace sector and funds world-class research and development. The ATI recently launched a startup accelerator to spur the growth of new companies in industry 4.0 and sustainable development, with the aim of bolstering the growth and competitiveness of the UK aerospace industry. The programme is designed to help startups establish commercial relationships with global aerospace companies, like Boeing and GKN Aerospace, raise follow-on funding and engage with the wider UK aerospace sector. The first cohort of companies was recently announced and you can check out a video of the selection day below.

    Wil’s background is in the tech industry: he founded Chew, a live streaming platform for DJs, and has worked as an angel investor and startup advisor for the startup accelerator Ignite. In this episode, Wil and I speak about:

    • his career background and route into the aerospace industry
    • the goal and operational principle of the ATI Boeing Accelerator
    • some of the aerospace startups in the first cohort of the accelerator
    • and Wil’s passion for entrepreneurship and STEM engagement


    Selected Links from the Episode

  • Podcast Ep. #32 – Air-to-Air Refuelling and Higher Education Engineering Reform with Dr Steve Bullock

    Dr Steve Bullock is an engineering researcher in air-to-air refuelling and cooperative control of UAVs, as well as the Programme Director of the Aerospace Engineering programme at the University of Bristol. As the programme director of a leading European aerospace engineering programme, Steve has a unique vantage point on how the higher education landscape is changing, and specifically, how technology trends such as aviation sustainability and digitisation are changing the requirements for an engineering university education in the 21st century.

    As a TeachFirst ambassador and presenter of the Cosmic Shed podcast, Steve has a clear passion for education in general and is actively exploring different ways of disseminating technical information to a broad audience. In this episode of the podcast, Steve and I talk about:

    • his path into aerospace engineering and how he found his passion for teaching
    • his PhD work on air-to-air refuelling and cooperative control
    • what he considers to be some of the key challenges in engineering university education
    • how the Aerospace Engineering department in Bristol is planning for the future
    • and much, much more.


    Selected Links from the Episode

  • Podcast Ep. #31 – Virtual Reality in Aerospace with Mbryonic Founder Tom Szirtes

    Tom Szirtes is the founder and director of Mbryonic, a London-based digital design studio. Mbryonic specialises in creating virtual reality (VR), augmented reality (AR) and mixed reality (MR) experiences that help organisations communicate, educate and entertain more effectively. Apart from the traditional applications in gaming and education, VR is now increasingly important for industrial design and engineering in general. For example, Mbryonic recently partnered with All Nippon Airways to provide customers with an immersive virtual tour of All Nippon’s new business class in the Boeing 777 cabin. Mbryonic has also partnered with Acumen to create ‘The Adient Ascent VR’: an experience built around a modular aircraft seating system that allows airlines to configure their cabins through a touch screen interface and then experience what it’s actually like to be in the cabin through a VR headset. Apart from discussing these two projects, Tom and I talk about:

    • the fundamentals of and differences between virtual reality, augmented reality and mixed reality
    • some of the advantages of VR that will transform the aerospace business landscape
    • and how engineers can benefit from using the technology


    Selected Links from the Episode

  • The History of Rocket Science

    Rocket technology has evolved over more than 2,000 years. Today’s rockets are the product of a long tradition of ingenuity and experimentation, and combine technical expertise from a wide array of engineering disciplines. Very few, if any, of humanity’s inventions are designed to withstand such extreme conditions: rockets are subjected to awesome g-forces at lift-off, extreme hot spots where aerodynamic friction acts most strongly, and extreme cold from liquid hydrogen and oxygen at cryogenic temperatures. Operating a rocket is a balancing act, and the line between a successful launch and a catastrophic blow-up is often razor thin. No other engineering system rivals the complexity and hierarchy of technologies that need to interface seamlessly to guarantee sustained operation. It is no coincidence that “rocket science” is the quintessential cliché for the mind-blowingly complicated.

    Fortunately for us, we live in a time where rocketry is undergoing another golden period. Commercial rocket companies like SpaceX and Blue Origin are breathing fresh air into an industry that has traditionally been dominated by government-funded space programs. But even the incumbent companies are not resting on their laurels, and are developing new powerful rockets for deep-space exploration and missions to Mars. Recent blockbuster movies such as Gravity, Interstellar and The Martian are an indication that space adventures are once again stirring the imagination of the public.

    What better time than now to look back at the past 2,000 years of rocketry, investigate where past innovation has taken us and look ahead to what is on the horizon? It’s certainly impossible to cover all 51 influential rockets in the chart below, but I will try my best to paint a broad-brush picture from the early beginnings in China to the Space Race and beyond.

    51 influential rockets ordered by height. Created by Tyler Skrabek

    The history of rocketry can be loosely split into two eras: first, an era of early pre-scientific tinkering, and second, the post-Enlightenment scientific approach. The underlying principle of rocket propulsion has largely remained the same, whereas the detailed means of operation and our approach to developing rocketry have changed a great deal.

    The fundamental principle of rocket propulsion, spewing hot gases through a nozzle to induce motion in the opposite direction, is nicely illustrated by two historic examples. The Roman writer Aulus Gellius tells a story of Archytas, who, sometime around 400 BC, built a flying pigeon out of wood. The pigeon was held aloft by a jet of steam or compressed air escaping through a nozzle. Several centuries later, Hero of Alexandria invented the aeolipile based on the same principle of using escaping steam as a propulsive fluid. In the aeolipile, a hollow sphere was connected to a water bath via tubing, which also served as a primitive type of bearing, suspending the sphere in mid-air. A fire beneath the water basin created steam which was subsequently forced to flow into the sphere via the connected tubing. The only way for the gas to escape was through two L-shaped outlets pointing in opposite directions. The escaping steam induced a moment about the hinged support, effectively rotating the sphere about its axis.

    In both these examples, the motion of the device is governed by the conservation of momentum. When the rocket and internal gases are moving as one unit, the overall momentum, the product of mass and velocity, is equal to P_1. Thus, for a total mass of rocket and gas, m = m_r + m_g, moving at velocity v,

    mv = \left(m_r + m_g\right)v = P_1

    As the gases are expelled through the rear of the rocket, the overall momentum of the rocket and fuel has to remain constant as long as no external forces act on the system. Thus, if a very small amount of gas dm is expelled at velocity v_e relative to the rocket (either in the direction of v or in the opposite direction), the overall momentum of the system is

    \left(m - \mathrm{d}m\right)\left(v + \mathrm{d}v_r\right) + \mathrm{d}m\left(v + v_e\right) = P_2

    As P_2 has to equal P_1 to conserve momentum,

    mv = \left(m - \mathrm{d}m\right)\left(v + \mathrm{d}v_r\right) + \mathrm{d}m\left(v + v_e\right)

    and by isolating the change in rocket velocity dv_r,

    \left(m - \mathrm{d}m\right)\mathrm{d}v_r = -v_e \, \mathrm{d}m
    \therefore \mathrm{d}v_r = -\frac{\mathrm{d}m}{m - \mathrm{d}m} \, v_e

    The negative sign in the equation above indicates that the rocket always changes velocity in the opposite direction of the expelled gas. Hence, if the gas is expelled in the opposite direction of the motion v (i.e. v_e is negative), then the change in the rocket velocity will be positive (i.e. it will accelerate).

    At any time t the quantity M = m - \mathrm{d}m is equal to the residual mass of the rocket (dry mass + propellant), and expelling the gas mass dm reduces the rocket mass by the same amount, i.e. \mathrm{d}M = -\mathrm{d}m. Substituting this, and from here on letting v_e denote the positive magnitude of the exhaust velocity (the gas being expelled rearwards), the expression above becomes \mathrm{d}v_r = -v_e \, \mathrm{d}M / M. If we assume that the exhaust velocity of the gas remains constant throughout, we can easily integrate this expression to find the change in velocity as the total rocket mass (dry mass + propellant) falls from an initial mass M_o to a final mass M_f. Hence,

    \Delta v = \int_{M_o}^{M_f} -v_e \, \frac{\mathrm{d}M}{M} = -v_e \ln M \Big|_{M_o}^{M_f} = v_e \left(\ln M_o - \ln M_f\right) = v_e \ln \frac{M_o}{M_f}

    This equation is known as the Tsiolkovsky rocket equation (more on him later) and is applicable to any body that accelerates by expelling part of its mass at a specific velocity.
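    For readers who want to play with the numbers, the rocket equation is a one-liner to evaluate. The exhaust velocity and mass ratio below are purely illustrative values, not data for any particular rocket:

```python
import math

def delta_v(v_e, m_initial, m_final):
    """Ideal velocity change from the Tsiolkovsky rocket equation.

    v_e       -- effective exhaust velocity (m/s)
    m_initial -- initial total mass M_o (rocket + propellant)
    m_final   -- final mass M_f after the burn
    """
    return v_e * math.log(m_initial / m_final)

# Illustrative numbers: 3,000 m/s exhaust velocity and a 9:1 mass ratio
print(round(delta_v(3000.0, 9.0, 1.0)))  # -> 6592 (m/s)
```

    Note that this is the idealised \Delta v only; gravity and aerodynamic drag, mentioned further down, reduce the net velocity gain in practice.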

    Often, we are more interested in the thrust created by the rocket and its associated acceleration a_r. Hence, dividing the equation for dv_r by a small time increment dt,

    a_r = \frac{\mathrm{d}v_r}{\mathrm{d}t} = -\frac{\mathrm{d}M}{\mathrm{d}t} \frac{v_e}{M} = \frac{\dot{M}}{M} v_e

    and the associated thrust F_r acting on the rocket is

    F_r = M a_r = \dot{M} v_e

    where \dot{M} is the mass flow rate of gas exiting the rocket. This simple equation captures the fundamental physics of rocket propulsion: a rocket creates more thrust either by expelling its mass at a higher rate \dot{M} or by increasing the velocity v_e at which the mass is expelled. In the idealised case that’s it! (By idealised we mean constant v_e and no external forces, e.g. aerodynamic drag in the atmosphere or gravity. In actual calculations of the required propellant mass, these forces and other efficiency-reducing factors have to be included.)
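    The thrust relation F_r = \dot{M} v_e is just as easy to evaluate numerically. As a rough, hypothetical illustration (the 150 kg/s mass flow rate is the V-2 figure quoted later in the article; the 2,000 m/s exhaust velocity is an assumed ballpark, not a sourced value):

```python
def thrust(m_dot, v_e):
    """Ideal rocket thrust in newtons.

    m_dot -- propellant mass flow rate (kg/s)
    v_e   -- effective exhaust velocity (m/s)
    """
    return m_dot * v_e

# Roughly V-2-like numbers: 150 kg/s at an assumed 2,000 m/s
print(thrust(150.0, 2000.0))  # -> 300000.0 N, i.e. about 300 kN
```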

    Graph of the Tsiolkovsky rocket equation

    A plot of the rocket equation highlights one of the most pernicious conundrums of rocketry: the amount of propellant required (i.e. the mass ratio M_o/M_f) to accelerate the rocket through a velocity change \Delta v at a fixed effective exhaust velocity v_e increases exponentially as we demand greater \Delta v. As the cost of a rocket is closely related to its mass, this explains why it is so expensive to propel anything of meaningful size into orbit (\Delta v \approx 28,800 km/h (18,000 mph) for low-Earth orbit).
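    Rearranging the rocket equation makes the exponential penalty explicit: M_o/M_f = e^{\Delta v / v_e}, so each additional increment of \Delta v multiplies the required mass ratio. A short sketch using the low-Earth-orbit figure from the text and an assumed exhaust velocity of 3,000 m/s (illustrative only):

```python
import math

def mass_ratio(delta_v, v_e):
    """Initial-to-final mass ratio required for a given ideal delta-v."""
    return math.exp(delta_v / v_e)

v_e = 3000.0            # assumed effective exhaust velocity (m/s)
dv_leo = 28800 / 3.6    # 28,800 km/h from the text, converted to 8,000 m/s

print(round(mass_ratio(dv_leo, v_e), 1))      # -> 14.4
print(round(mass_ratio(2 * dv_leo, v_e), 1))  # doubling delta-v squares the ratio
```

    In other words, under these assumptions roughly 13 of every 14 kilograms on the pad must be propellant just to reach orbital speed, before any losses are accounted for.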

    The early beginnings


    Drawing of a Chinese rocket and launching mechanism

    The wooden pigeon and aeolipile do not resemble anything that we would recognise as a rocket. In fact, the exact date when rockets first appeared is still unresolved. Records show that the Chinese developed gunpowder, a mixture of saltpetre, sulphur and charcoal dust, around 100 AD. Gunpowder was used to create colourful sparks, smoke and explosive devices out of hollow bamboo sticks, closed off at one end, for religious festivals. Perhaps some of these bamboo tubes shot off or skittered along the ground; in any case, the Chinese began tinkering with the gunpowder-filled bamboo sticks and attached them to arrows. Initially the arrows were launched in the traditional way using bows, creating a form of early incendiary bomb, but later the Chinese realised that the bamboo sticks could launch themselves purely by the thrust produced by the escaping hot gases.

    The first documented use of such a “true” rocket was during the battle of Kai-Keng between the Chinese and Mongols in 1232. During this battle the Chinese managed to hold the Mongols at bay using a primitive form of a solid-fuelled rocket. A hollow tube was capped at one end, filled with gunpowder and then attached to a long stick. The ignition of the gunpowder increased the pressure inside the hollow tube and forced some of the hot gas and smoke out through the open end. As governed by the law of conservation of momentum, this created thrust to propel the rocket in the direction of the capped end of the tube, with the long stick acting as a primitive guidance system, very much reminiscent of the firework “rockets” we use today.

    Wan Hu (the man in the moon?) and his rocket chair

    According to a Chinese legend, Wan Hu, a local official during the 16th century Ming dynasty, constructed a chair with 47 gunpowder bamboo rockets attached, and in some versions of the legend supposedly fitted kite wings as well. The rocket chair was launched by igniting all 47 bamboo rockets simultaneously, and apparently, after the commotion was over, Wan Hu was gone. Some say he made it into space, and is now the “Man in the Moon”. Most likely, Wan Hu suffered the first ever launch pad failure.

    One theory is that rockets were brought to Europe via the 13th century Mongol conquests. In England, Roger Bacon developed a more powerful gunpowder (75% saltpetre, 15% carbon and 10% sulphur) that increased the range of rockets, while Jean Froissart improved aiming accuracy by launching rockets through tubes, a forerunner of the launch pad. By the Renaissance, the use of rockets for weaponry had fallen out of fashion and experimentation with fireworks increased instead. In the late 16th century, a German tinkerer, Johann Schmidlap, experimented with staged rockets, an idea that is the basis for all modern rockets. Schmidlap fitted a smaller second-stage rocket on top of a larger first-stage rocket; once the first stage burned out, the second stage continued to propel the rocket to higher altitudes. At about the same time, Kazimierz Siemienowicz, a Polish-Lithuanian commander in the Polish Army, published a manuscript that included designs for multi-stage rockets and delta-wing stabilisers intended to replace the long guiding rods then in use.

    The scientific method meets rocketry

    The scientific groundwork of rocketry was laid during the Enlightenment by none other than Sir Isaac Newton. His three laws of motion,

    1) In a particular reference frame, a body will stay in a state of constant velocity (moving or at rest) unless a net force is acting on the body
    2) The net force acting on a body causes an acceleration that is proportional to the body’s inertia (mass), i.e. F = ma
    3) A force exerted by one body on another induces an equal and opposite reaction force on the first body

    are known to every student of basic physics. These three laws were probably understood intuitively by early rocket designers, but formalising the principles meant they could be used consciously as design guidelines. The first law explains why rockets move at all: without propulsive thrust the rocket remains stationary. The second quantifies the amount of thrust produced by a rocket at a specific instant in time, i.e. for a specific mass M. (Note that Newton’s second law is only valid for constant-mass systems and is therefore not equivalent to the conservation of momentum approach described above. When mass varies, an equation that explicitly accounts for the changing mass has to be used.) The third law explains why the expulsion of mass rearwards produces a reaction force, the thrust, on the rocket.

    In the 1720s, at around the time of Newton’s death, researchers in the Netherlands, Germany and Russia started to use Newton’s laws as tools in the design of rockets. The Dutch professor Willem Gravesande built rocket-propelled cars by forcing steam through a nozzle, while rocket designers in Germany and Russia started to experiment with larger rockets. These rockets were powerful enough that the hot exhaust flames burnt deep holes into the ground before launching. The British colonial wars of 1792 and 1799 saw the use of Indian rocket fire against the British army: Hyder Ali and his son Tipu Sultan, the rulers of the Kingdom of Mysore in India, developed the first iron-cased rockets and used them against the British in the Anglo-Mysore Wars.

    Casing the propellant in iron, which extended range and thrust, was more advanced technology than anything the British had seen until then, and inspired by this technology, the British Colonel William Congreve began to design his own rocket for the British forces. Congreve developed a new propellant mixture and fitted an iron tube with a conical nose to improve aerodynamics. Congreve’s rockets had an operational range of up to 5 km and were successfully used by the British in the Napoleonic Wars and launched from ships to attack Fort McHenry in the War of 1812. Congreve created both carbine ball-filled rockets to be used against land targets, and incendiary rockets to be used against ships. However, even Congreve’s rockets could not significantly improve on the main shortcoming of rockets: accuracy.

    A selection of Congreve rockets (Wikimedia Commons)

    At the time, the effectiveness of rockets as a weapon lay not in their accuracy or explosive power, but rather in the sheer number that could be fired simultaneously at the enemy. The Congreve rockets managed some form of basic attitude control by attaching a long stick to the explosive, but still had a tendency to veer sharply off course. In 1844, a British designer, William Hale, developed spin stabilisation, now commonly used in gun barrels, which removed the need for the rocket stick. Hale forced the escaping exhaust gases at the rear of the rocket to impinge on small vanes, causing the rocket to spin and stabilise (for the same reason that a gyroscope remains upright when spun on a table top). The use of rockets in war soon took a back seat once again when the Prussian army developed the breech-loading cannon with exploding warheads that proved far superior to the best rockets.

    The era of modern rocketry

    Soon, new applications for rockets were being imagined. Jules Verne, always the visionary, put the dream of space flight into words in his science-fiction novel “De la Terre à la Lune” (From the Earth to the Moon), in which a projectile carrying three passengers is shot at the Moon from a giant cannon, the Columbiad. The Russian schoolteacher Konstantin Tsiolkovsky (of rocket equation fame) proposed the idea of using rockets as a vehicle for space exploration, but acknowledged that achieving such a feat would require significant developments in the range of rockets. Tsiolkovsky understood that the speed and range of rockets was limited by the exhaust velocity of the propellant gases. In a 1903 report, “Research into Interplanetary Space by Means of Rocket Power”, he suggested the use of liquid propellants and formalised the rocket equation derived above, relating the rocket engine exhaust velocity to the change in velocity of the rocket itself (now known as the Tsiolkovsky rocket equation in his honour, although it had been derived before).

    Tsiolkovsky also advocated the development of orbital space stations, solar energy and the colonisation of the Solar System. One of his quotes is particularly prescient considering Elon Musk’s plans to colonise Mars:

    “The Earth is the cradle of humanity, but one cannot live in the cradle forever” — In a letter written by Tsiolkovsky in 1911.

    The American scientist Robert H. Goddard, now known as the father of modern rocketry, was equally interested in extending the range of rockets, especially reaching higher altitudes than the gas balloons used at the time. In 1919 he published a short manuscript entitled “A Method of Reaching Extreme Altitudes” that summarised his mathematical analysis and practical experiments in designing high altitude rockets. Goddard proposed three ways of improving solid-fuel technology. First, combustion should be confined to a small chamber so that the fuel container would be subjected to much lower pressure. Second, Goddard advocated the use of multi-stage rockets to extend their range, and third, he suggested the use of a supersonic de Laval nozzle to improve the exhaust speed of the hot gases.

    Goddard started to experiment with solid-fuel rockets, trying various compounds and measuring the velocity of the exhaust gases. As a result of this work, Goddard became convinced of Tsiolkovsky’s earlier conclusion that a liquid propellant would work better. The problem Goddard faced was that liquid-propellant rockets were an entirely new field of research: no one had ever built one, and the system required was much more complex than for a solid-fuelled rocket. Such a rocket would need separate tanks and pumps for the fuel and oxidiser, a combustion chamber to combine and ignite the two, and a turbine to drive the pumps (much like the turbine in a jet engine drives the compressor at the front). Goddard also added a de Laval nozzle, which accelerated and cooled the hot exhaust gases into a supersonic, highly directed jet, more than doubling the thrust and increasing engine efficiency from 2% to 64%! Despite these technical challenges, Goddard designed the first successful liquid-fuelled rocket, propelled by a combination of gasoline as fuel and liquid oxygen as oxidiser, and tested it on March 16, 1926. The rocket remained lit for 2.5 seconds and reached an altitude of 12.5 metres. Just like the first 40-yard flight of the Wright brothers in 1903, this feat seems unimpressive by today’s standards, but Goddard’s achievements put rocketry on an exponential growth curve that led to radical improvements over the next 40 years. Goddard himself continued to innovate: his rockets flew to ever higher altitudes, he added a gyroscope system for flight control and introduced parachute recovery systems.

    On the other side of the Atlantic, German scientists were beginning to play a major role in the development of rockets. Inspired by Hermann Oberth’s ideas on rocket travel, the mathematics of spaceflight and the practical design of rockets published in his book “Die Rakete zu den Planetenräumen” (The Rocket into Planetary Space), a number of rocket societies and research institutes were founded in Germany. The German bicycle and car manufacturer Opel (now part of GM) began developing rocket powered cars, and in 1928 Fritz von Opel drove the Opel-RAK.1 on a racetrack. In 1929 this design was extended to the Opel-Sander RAK 1-airplane, which crashed during its first flight in Frankfurt. In the Soviet Union, the Gas Dynamics Laboratory in Leningrad under the directorship of Valentin Glushko built more than 100 different engine designs, experimenting with different fuel injection techniques.

    A cross-section of the V-2 rocket (Wikimedia Commons)

    Under the direction of Wernher von Braun and Walter Dornberger, former members of the Verein für Raumschiffahrt, or Society for Space Travel, played a pivotal role in the development of the Vergeltungswaffe 2, also known as the V-2 rocket, the most advanced rocket of its time. The V-2 burned a mixture of alcohol as fuel and liquid oxygen as oxidiser, and it achieved great amounts of thrust by considerably increasing the mass flow rate of propellant to about 150 kg (330 lb) per second. The V-2 featured much of the technology we see on rockets today, such as turbopumps and guidance systems, and with a range of around 300 km (190 miles) it could be launched from the European continent to bomb London during WWII. The 1,000 kg (2,200 lb) explosive warhead fitted in the tip of the V-2 was capable of devastating entire city blocks, but still lacked the accuracy to reliably hit specific targets. Towards the end of WWII, German scientists were already planning much larger rockets, today known as Intercontinental Ballistic Missiles (ICBMs), that could be used to attack the United States, and were strapping rockets to aircraft either for powering them or for vertical take-off.

    With the fall of the Third Reich in April 1945, much of this technology fell into the hands of the Allies. The Allies’ own rocket programmes were much less sophisticated, so a race ensued to capture as much of the German technology as possible. The Americans alone captured 300 train loads of V-2 rocket parts and shipped them back to the United States. Furthermore, the most prominent of the German rocket scientists emigrated to the United States, partly due to the much better opportunities to develop rocketry there, and partly to escape the repercussions of having played a role in the Nazi war machine. The V-2 essentially evolved into the American Redstone rocket, which was used during the Mercury project.

    The Space Race – to the moon and beyond

    After WWII both the United States and the Soviet Union began heavily funding research into ICBMs, partly because these had the potential to carry nuclear warheads over long distances, and partly due to the allure of being the first to travel to space. In 1948, the US Army combined a captured V-2 rocket with a WAC Corporal rocket to build the largest two-stage rocket yet launched in the United States. This two-stage rocket was known as the “Bumper-WAC”, and over the course of six flights reached a peak altitude of 400 km (250 miles), almost exactly the altitude at which the International Space Station (ISS) orbits today.

    The Vostok rocket based on the R-7 ICBM

    Despite these developments, the Soviets were the first to put a man-made object, i.e. an artificial satellite, into orbit. Under the leadership of chief designer Sergei Korolev, the V-2 was copied and then improved upon in the R-1, R-2 and R-5 missiles. In the early 1950s the German designs were abandoned and replaced with the inventions of Aleksei Mikhailovich Isaev, which were used as the basis for the first Soviet ICBM, the R-7. The R-7 was further developed into the Vostok rocket family, and an R-7 derivative launched the first satellite, Sputnik I, into orbit on October 4, 1957, a mere 12 years after the end of WWII. The launch of Sputnik I was the first major news story of the space race. Only a couple of weeks later the Soviets successfully launched Sputnik II into orbit with the dog Laika on board.

    One problem that the Soviets had not yet solved was atmospheric re-entry. Any object wishing to orbit a planet requires enough speed that the gravitational attraction towards the planet is offset by the curvature of the planet’s surface. During re-entry, however, this speed causes the orbiting body to slam into the atmosphere, creating an incredible amount of heat. In 1951, H.J. Allen and A.J. Eggers discovered that a high-drag, blunt shape, not a low-drag teardrop, counter-intuitively minimises re-entry heating by redirecting 99% of the energy into the surrounding atmosphere. Allen and Eggers’ findings were published in 1958 and were used in the Mercury, Gemini, Apollo and Soyuz manned space capsules. This design was later improved upon in the Space Shuttle: a shock wave was induced ahead of the Shuttle’s heat shield by flying at an extremely high angle of attack, deflecting most of the heat away from the heat shield.

    The United States’ first satellite, Explorer I, would not follow until January 31, 1958. Explorer I weighed about 30 times less than the Sputnik II satellite, but the Geiger radiation counters on board were used to make the first scientific discovery in outer space: the Van Allen radiation belts. Explorer I had originally been developed as a US Army project, and in October 1958 the National Aeronautics and Space Administration (NASA) was officially formed from its predecessor, the National Advisory Committee for Aeronautics (NACA), to oversee the space programme. Meanwhile, the Soviets developed the Vostok and Soyuz families of rockets from the original R-7 ICBM, as well as the heavier Proton rocket, for their human spaceflight programme. In fact, the Soyuz rocket is still in use today, is the most frequently flown and most reliable rocket system in history, and after the Space Shuttle’s retirement in 2011 became the only viable means of transport to the ISS. Similarly, the Proton rocket, also developed in the 1960s, is still being used to haul heavier cargo into low-Earth orbit.

    The Soyuz rocket in transport to the launch site

    Shortly after these initial satellite launches, NASA developed the experimental X-15 air-launched rocket-propelled aircraft, which, in 199 flights between 1959 and 1968, broke numerous flight records, including records for speed (7,274 km/h or 4,520 mph) and altitude (108 km or 67 miles). The X-15 also provided NASA with data on the optimal re-entry angles from space into the atmosphere.

    The next milestone in the space race once again belonged to the Soviets. On April 12, 1961, the cosmonaut Yuri Gagarin became the first human to travel into space. Over a period of just under two hours, Gagarin orbited the Earth inside a Vostok 1 space capsule at around 300 km (190 miles) altitude, and after re-entry into the atmosphere ejected at an altitude of 6 km (20,000 feet) and parachuted to the ground. Overnight, Gagarin became the most famous Soviet on the planet, travelling around the world as a beacon of Soviet success and superiority over the West.

    Shortly after Gagarin’s successful flight, the American astronaut Alan Shepard reached a suborbital altitude of 187 km (116 miles) in the Freedom 7 Mercury capsule. The Redstone missile that was used to launch Shepard from Cape Canaveral did not quite have the power to send the Mercury capsule into orbit, and had suffered a series of embarrassing failures prior to the launch, increasing the pressure on the US rocket engineers. However, days after Shepard’s flight, President John F. Kennedy delivered the now famous words before a joint session of Congress:

    “This nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth.”

    Despite the bold nature of this challenge, NASA’s Mercury project was already well underway in developing the technology to put the first human on the Moon. In February 1962, the more powerful Atlas missile propelled John Glenn into orbit, restoring some form of parity between the USA and the Soviet Union. The last of the Mercury flights took place in 1963, with Gordon Cooper orbiting the Earth for nearly 1.5 days. The family of Atlas rockets remains one of the most successful to this day. Apart from launching a number of astronauts into space during the Mercury project, the Atlas has been used to bring commercial, scientific and military satellites into orbit.

    Following the Mercury missions, the Gemini project made significant strides towards a successful Moon flight. The Gemini capsule was propelled by an even more powerful ICBM, the Titan, and allowed astronauts to remain in space for up to two weeks, during which they gained the first experience with space-walking, and with rendezvous and docking procedures with the Gemini spacecraft. An incredible ten Gemini missions were flown throughout 1965-66. The high success rate of the missions was testament to the improving reliability of NASA’s rockets and spacecraft, and allowed NASA engineers to collect invaluable data for the coming Apollo Moon missions. The Titan missile itself remains one of the most successful and long-lived rockets (1959-2005), carrying the Viking spacecraft to Mars, the Voyager probes to the outer solar system, and multiple heavy satellites into orbit. At about the same time, around the early 1960s, an entire family of versatile rockets, the Delta family, was being developed. The Delta family became the workhorse of the US space programme, achieving more than 300 launches with a reliability greater than 95%! The versatility of the Delta family was based on the ability to tailor the lifting capability, using different interchangeable stages and external boosters that could be added for heavier lifting.

    At this point, the tide had mostly turned. The United States had been off to a slow start but had used the data from their early failures to improve the design and reliability of their rockets. The Soviets, while being more successful initially, could not achieve the same rate of launch success and this significantly hampered their efforts during the upcoming race to the moon.

    The Delta 4 rocket family (Photo Credit: United Launch Alliance)

    To get to the Moon, a much more powerful rocket than the Titan or Delta rockets would be needed. This now famous rocket, the 110.6 m (363 feet) tall Saturn V (check out this drawing), consisted of three separate main rocket stages; the Apollo capsule with a small fourth propulsion stage for the return trip; and a two-staged lunar lander, with one stage for descending onto the Moon’s surface and the other for lifting back off the Moon. The Saturn V was largely the brainchild and crowning achievement of Wernher von Braun, the original lead developer of the V-2 rocket in WWII Germany, with a capability of launching 140,000 kg (310,000 lb) into low-Earth orbit and 48,600 kg (107,100 lb) to the Moon. This launch capability dwarfed all previous rockets, and to this day the Saturn V remains the tallest, heaviest and most powerful rocket ever flown to operational status (last on the chart at the start of the piece). NASA’s efforts reached their glorious climax with the Apollo 11 mission on July 20, 1969, when astronaut Neil Armstrong became the first man to set foot on the Moon, a mere 11.5 years after the first successful launch of the Explorer I satellite. The Apollo 11 mission became the first of six successful Moon landings throughout the years 1969-1972. A smaller version of the Moon rocket, the Saturn IB, was also developed and used for some of the early Apollo test missions and later to transport three crews to the US space station Skylab.

    The Space Shuttle

    The Space Shuttle “Discovery”

    NASA’s final major innovation was the Space Shuttle. The idea behind the Space Shuttle was to design a reusable rocket system for carrying crew and payload into low-Earth orbit. The rationale behind this idea is that manufacturing the rocket hardware is a major contributor to the overall launch costs, and that destroying the different stages after each launch is not cost effective. Imagine having to throw away your Boeing 747 or Airbus A380 every time you fly from London to New York; ticket prices would be nowhere near where they are now. The Shuttle consisted of a winged, airplane-like orbiter that was boosted into orbit by liquid-propellant engines on the Shuttle itself, fuelled from a massive orange external tank, and two solid rocket boosters attached to either side. After launch, the solid rocket boosters and external fuel tank were jettisoned, and the boosters recovered for future use. At the end of a Shuttle mission, the orbiter re-entered Earth’s atmosphere and then followed a tortuous zig-zag course, gliding unpowered to land on a runway like any other aircraft. NASA promised that the Shuttle would reduce launch costs by 90%. However, splashdowns of the solid rocket boosters often damaged them beyond repair, and the effort required to service the orbiter heat shield, inspecting each of the 24,300 unique tiles separately, ultimately drove the cost of putting a kilogram of payload in orbit above that of the Saturn V rocket that preceded it. The five Shuttles, the Endeavour, Discovery, Challenger, Columbia and Atlantis, completed 135 missions between 1981 and 2011, with the tragic loss of the Challenger in 1986 and the Columbia in 2003. While the Shuttle facilitated the construction of the International Space Station and the installation of the Hubble space telescope in orbit, the ultimate goal of economically sustainable space travel was never achieved.

    However, this goal is now on the agenda of commercial space companies such as SpaceX, Reaction Engines, Blue Origin, Rocket Lab and the Sierra Nevada Corporation.

    New approaches

    After the demise of the Space Shuttle programme in 2011, the United States’ capability of launching humans into space was heavily restricted. NASA is currently working on a new Space Launch System (SLS), the aim of which is to extend NASA’s range beyond low-Earth orbit and further out into the Solar system. Although the SLS is being designed and assembled by NASA, partners such as Boeing, United Launch Alliance, Orbital ATK and Aerojet Rocketdyne are co-developing individual components. The SLS specification as it stands would make it the most powerful rocket in history, and the SLS is therefore being developed in two stages (reminiscent of the Saturn IB and Saturn V rockets). First, a rocket with a payload capability of 70 metric tons (154,000 lb) is being developed from components of previous rockets. The goal of this heritage SLS is to conduct two lunar flybys with the Orion spacecraft, one unmanned and the other with a crew. Second, a more advanced version of the SLS with a payload capability of 130 metric tons (290,000 lb) to low-Earth orbit, about the same payload capacity and 20% more thrust than the Saturn V rocket, is intended to carry scientific equipment, cargo and the manned Orion capsule into deep space. The first flight of an unmanned Orion capsule on a trip around the Moon is planned for 2018, while manned missions are expected by 2021-2023. By 2026 NASA plans to send a manned Orion capsule to an asteroid previously placed into lunar orbit by a robotic “capture-and-place” mission.

    NASA’s upgrade plan for the SLS

    However, with the commercialisation of space travel, new entrants are now working on even more daunting goals. The SpaceX Falcon 9 rocket has proven to be a very reliable launch system (with a current success rate of 20 out of 22 launches). Furthermore, SpaceX was the first private company to successfully launch and recover an orbital spacecraft, the Dragon capsule, which regularly delivers cargo and new scientific equipment to the ISS. Currently, the US relies on the Russian Soyuz rocket to bring astronauts to the ISS, but in the near future manned missions are planned with the Dragon capsule. The Falcon 9 rocket is a two-stage-to-orbit launch vehicle powered by nine SpaceX Merlin rocket engines fuelled by liquid oxygen and kerosene, with a payload capacity of 13 metric tons (29,000 lb) into low-Earth orbit. There have been three versions of the Falcon 9: v1.0 (retired), v1.1 (retired) and most recently the partially reusable full-thrust version, which on December 22, 2015 used propulsive recovery to land the first stage safely at Cape Canaveral. Efforts are now being made to extend the landing capabilities from land to sea barges. Furthermore, the Falcon Heavy with 27 Merlin engines (a central Falcon 9 rocket with two Falcon 9 first stages strapped to the sides) is expected to extend SpaceX’s lifting capacity to 53 metric tons into low-Earth orbit, making it the second most powerful rocket after NASA’s planned SLS. First flights of the Falcon Heavy are expected late this year (2016). Of course, the ultimate goal of SpaceX’s CEO Elon Musk is to make humans a multi-planetary species, and to achieve this he is planning to send a colony of a million humans to Mars via the Mars Colonial Transporter, a space launch system of reusable rocket engines, launch vehicles and space capsules.

    SpaceX’s Falcon 9 rocket already has the lowest launch costs at $60 million per launch, but reliable re-usability should bring these costs down over the next decade such that a flight ticket to Mars could become enticing for at least a million of the richest people on Earth (or perhaps we could sell spots on “Mars – A Reality TV show“).

    When will this become reality?

    Blue Origin, the rocket company of Amazon founder Jeff Bezos, is taking a similar approach of vertical takeoff and landing to achieve re-usability and lower launch costs. The company is on an incremental trajectory to extend its capabilities from suborbital to orbital flight, led by its motto “Gradatim Ferociter” (Latin for step by step, ferociously). Blue Origin’s New Shepard rocket underwent its first test flight in April 2015. In November 2015 the rocket landed successfully after a suborbital flight to 100 km (330,000 ft) altitude, and this was extended to 101 km (333,000 ft) in January 2016. Blue Origin hopes to extend its capabilities to human spaceflight by 2018.

    Reaction Engines is a British aerospace company conducting research into space propulsion systems, focused on the Skylon reusable single-stage-to-orbit spaceplane. The Skylon would be powered by the SABRE engine, a rocket-based combined cycle, i.e. a combination of an air-breathing jet engine and a rocket engine, whereby both engines share the same flow path and are designed to be reusable for about 200 flights. Reaction Engines believes that with this system the cost of carrying one kg (2.2 lb) of payload into low-Earth orbit can be reduced from around $1,500 today (early 2016) to around $900. The hydrogen-fuelled Skylon is designed to take off from a purpose-built runway and accelerate to Mach 5 at 28.5 km (93,500 feet) altitude using the atmosphere’s oxygen as oxidiser. This air-breathing part of the SABRE engine works on the same principles as a jet engine. A turbo-compressor is used to raise the pressure of the incoming atmospheric air, preceded by a pre-cooler that cools the hot air impinging on the engine at hypersonic speeds. The compressed air is fed into a rocket combustion chamber where it is ignited with liquid hydrogen. As in a standard jet engine, a high pressure ratio is crucial to pack as much oxidiser as possible into the combustion chamber and increase the thrust of the engine. As the natural source of oxygen runs out at high altitude, the engines switch to the internally stored liquid oxygen supplies, transforming the engine into a closed-cycle rocket and propelling the Skylon spacecraft into orbit. The theoretical advantages of the SABRE engine are its high fuel efficiency and low mass, which facilitate the single-stage-to-orbit approach. Reminiscent of the Shuttle, after deploying its payload of up to 15 tons (33,000 lb), the Skylon spacecraft would re-enter the atmosphere protected by a heat shield and land on a runway. The first ground tests of the SABRE engine are planned for 2019 and first unmanned test flights are expected for 2025.

    SABRE rocket engine

    Sierra Nevada Corporation is working alongside NASA to develop the Dream Chaser spacecraft for transporting cargo and up to seven people to low-Earth orbit. The Dream Chaser is designed to launch on top of the Atlas V rocket (in place of the nose cone) and land conventionally by gliding onto a runway. The Dream Chaser looks a lot like a smaller version of the Space Shuttle, so intuitively one would expect the same cost inefficiencies as for the Shuttle. However, the engineers at Sierra Nevada say that two changes have been made to the Dream Chaser that should reduce the maintenance costs. First, the thrusters used for attitude control are ethanol-based, and therefore non-toxic and a lot less volatile than the hydrazine-based thrusters used by the Shuttle. This should allow maintenance of the Dream Chaser to begin immediately after landing and reduce the time between flights. Second, the thermal protection system is based on an ablative tile that can survive multiple flights and can be replaced in larger groups rather than tile-by-tile. The Dream Chaser is planned to undergo orbital test flights in November 2016.

    The Dream Chaser

    Finally, the New Zealand-based firm Rocket Lab is developing the all-carbon-composite, liquid-fuelled Electron rocket with a payload capability to low-Earth orbit of 110 kg (240 lb). Rocket Lab is thus focusing on high-frequency rocket launches to transport low-mass payloads, e.g. nanosatellites, into orbit. The goal of Rocket Lab is to make access to space frequent and affordable such that the rapidly evolving small-scale satellites that provide us with scientific measurements and high-speed internet can be launched reliably and quickly. The Rocket Lab system is designed to cost $5 million per launch at 100 launches a year and use less fuel than a flight on a Boeing 737 from San Francisco to Los Angeles. A special challenge that Rocket Lab is facing is the development of the all-carbon-composite liquid oxygen tanks needed to provide the required mass efficiency. To date, the containment of cryogenic (super cold) liquid fuels, such as liquid hydrogen and liquid oxygen, is still the domain of metallic alloys. Concerns remain about potential leaks due to micro-cracks developing in the resin of the composite at cryogenic temperatures. In composites, there is a mismatch between the thermal expansion coefficients of the reinforcing fibre and the resin, which induces thermal stresses as the composite is cooled to cryogenic temperatures from its high-temperature/high-pressure curing process. The temperature and pressure cycles during the fill-and-drain procedures then induce additional fatigue loading that can lead to cracks permeating through the structure, through which hydrogen or oxygen molecules can easily pass. Such leaks pose a real explosion risk.
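    A back-of-the-envelope estimate shows why these thermal stresses matter. Assuming some illustrative values (a resin modulus of 3.5 GPa, a fibre/resin expansion mismatch of 30×10⁻⁶/K, and cooling from a 180°C cure down to liquid-oxygen temperature at −183°C; none of these figures are from Rocket Lab), the fully constrained thermal stress σ = E·Δα·ΔT in the resin works out to:

```python
# Rough thermal stress estimate in the resin of a cryogenic composite tank:
# sigma = E * delta_alpha * delta_T for a fully constrained material.
# All values are illustrative assumptions, not Rocket Lab data.

E_resin = 3.5e9          # Pa, typical epoxy resin modulus
delta_alpha = 30e-6      # 1/K, fibre/resin thermal expansion mismatch
delta_T = 180 - (-183)   # K, cure temperature down to liquid oxygen

sigma = E_resin * delta_alpha * delta_T
print(f"Thermal stress: {sigma / 1e6:.1f} MPa")
# ~38 MPa -- a sizeable fraction of a typical epoxy's tensile strength,
# which is why microcracking at cryogenic temperatures is a real concern.
```

    Even this crude estimate lands within an order of magnitude of typical resin strengths, before any fill-and-drain fatigue cycling is accounted for.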

    Where do we go from here?

    As we have seen, over the last 2000 years rockets have evolved from simple toys and military weapons to complex machines capable of transporting humans into space. To date, rockets are the only viable gateway to places beyond Earth. Furthermore, we have seen that the development of rockets has not always followed a uni-directional path towards improvement. Our capability to send heavier and heavier payloads into space peaked with the development of the Saturn V rocket. This great technological leap was fuelled, to a large extent, by the competitive spirit of the Soviet Union and the United States. Unprecedented funds were available to rocket scientists on both sides during the 1950s-1970s. Furthermore, dreamers and visionaries such as Jules Verne, Konstantin Tsiolkovsky and Gene Roddenberry sparked the imagination of the public and garnered support for the space programs. After the 2003 Columbia disaster, public support for spending taxpayer money on often over-budget programs understandably waned. However, the successes of commercial space companies, their fierce competition and their visionary goals of colonising Mars are once again inspiring a younger generation. This is, once again, an exciting time for rocketry.

  • Big Data in Aerospace

    “Big data” is all abuzz in the media these days. As more and more people are connected to the internet and sensors become ubiquitous parts of daily hardware, an unprecedented amount of information is being produced. Some analysts project 40% annual growth in data over the next decade, which means that in ten years roughly 30 times more data will be produced than today. Given this trend, what are the implications for the aerospace industry?
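    The compounding behind that projection can be checked in a couple of lines; the snippet below assumes the 40% figure is an annual growth rate (an assumption, since the original analyst projection is not cited here):

```python
# Compound annual growth: 40% per year sustained over a decade.
annual_growth = 0.40
years = 10

multiplier = (1 + annual_growth) ** years
print(f"Data volume multiplier after {years} years: {multiplier:.1f}x")
# 1.4^10 is roughly 28.9, i.e. close to the 30-fold figure quoted above.
```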

    Big Data
    Big data: According to Google a “buzzword to describe a massive volume of both structured and unstructured data that is so large that it’s difficult to process using traditional database and software techniques.”

    Fundamentally, big data is nothing new for the aerospace industry. Sensors have been collecting data on aircraft for years, ranging from in-flight data such as speed, altitude and aircraft stability, to damage and crack growth progression recorded at service intervals. The authorities and parties involved have done an incredible job of using routine data and data gathered from failures to raise safety standards.
    What exactly does “big data” mean? Big data is characterised by a data stream that is high in volume, high in velocity, and coming from multiple sources in a variety of forms. This combination of factors makes analysing and interpreting a live data stream incredibly difficult, but such a capability is exactly what is needed in the aerospace environment. For example, structural health monitoring has received a lot of attention within research institutes because an internal sensory system that provides information about the real stresses and strains within a structure could improve prognostics about the “health” of a part and indicate when service intervals and replacements are needed. Such a system could look at the usage data of an aircraft and predict when a component needs replacing. For example, the likelihood that a part will fail could be translated into an associated repair that is the best compromise in terms of safety and cost. Furthermore, the information can be fed back to the structural engineers to improve the design of future aircraft. Ideally, you want to replicate the way the nervous system uses pain to signal damage within the body and then trigger a remedy. Even though structural health monitoring systems are feasible today, analysing the data stream in real time and providing diagnostics and prognostics remains a challenge.
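    As a toy illustration of the prognostics idea (not any real aircraft system; the sensor values, smoothing constant and threshold below are all invented for the example), a health-monitoring loop might smooth a noisy damage index derived from strain sensors and flag a component for service once the smoothed value crosses a threshold:

```python
# Toy structural health monitoring sketch: smooth a noisy damage index
# with an exponential moving average and flag when it exceeds a threshold.
# All numbers are illustrative, not from any real aircraft system.

def needs_service(damage_readings, threshold=0.6, alpha=0.3):
    """Return the reading index at which the smoothed damage index
    first exceeds `threshold`, or None if it never does."""
    ema = damage_readings[0]
    for i, reading in enumerate(damage_readings):
        ema = alpha * reading + (1 - alpha) * ema  # exponential smoothing
        if ema > threshold:
            return i
    return None

# Simulated sensor stream: the damage index slowly grows, with noise.
readings = [0.1, 0.15, 0.2, 0.4, 0.5, 0.7, 0.85, 0.9, 0.95]
print(needs_service(readings))  # index of the reading that trips the flag
```

    The smoothing step stands in for the real challenge the paragraph describes: separating a genuine damage trend from sensor noise in a live stream.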
    Other areas within aerospace that will greatly benefit from insights gleaned from data streams are cyber security, understanding automation and the human-machine interaction, aircraft behaviour under different weather and traffic situations, and supply chain management. Big data could also serve as the underlying structure that establishes autonomous aircraft on a wide scale. Finally, big data opens the door to a new type of adaptive design in which data from sensors are used to describe the characteristics of a specific outcome, and a design is then iterated until the desired and actual data match. This is very much an evolutionary, trial-and-error approach that will be invaluable for highly complex systems where cause and effect are not easily correlated and deterministic approaches are not possible. For example, a research team may define a general, loosely defined hypothesis about a future design or system they are trying to understand, and then use data analytics to explore the available solutions and come up with initial insights into the governing factors of the system. In this case it is imperative to fail quickly and find out what works and what does not. The algorithm can then be refined iteratively by using the expertise of an engineer to point the computer in the right direction.
    Thus, the main goal is to turn data into useful, actionable knowledge. For example, in the 1990s very limited data existed for understanding the airport taxi-way structure. Today we have the opposite situation in that we have more data than we can actually use. Furthermore, not only the quantity but also the quality of data is increasing rapidly, such that computer scientists are able to design more detailed models to describe the underlying physics of complex systems. When converting data to actionable information, one challenge is how to account for as much of the data as possible before reaching a conclusion. Thus, a high-velocity, high-volume and diverse data stream may not be the most important characteristic for data analytics. Rather, it is more important that the data be relevant, complete and measurable. Therefore, good insights can also be gleaned from smaller datasets if the data analytics is powerful.
    While aerospace is neither search nor social media, big data is incredibly important because the underlying streams from distributed data systems on aircraft or weather data systems can be aggregated and analysed in concert to create new insights for safety. Thus, in the aerospace industry the major value drivers will be data analytics and data science, which will allow engineers and scientists to combine datasets in new ways and gain insights from complex systems that are hard to analyse deterministically. The major challenge is how to upscale current systems into a new era where the information system is the foundation of the entire aerospace environment. In this manner data science will become a fundamental pillar of aerospace engineering, alongside the classical foundations of propulsion, structures, control and aerodynamics.
  • Jet Engine Detail Design: The Compressor

    In this post the design of jet engine compressors will be discussed, leading to the definition of ballpark performance parameters. For smaller engines centrifugal (CF) compressors are used since they can handle smaller flow rates more effectively and are more compact than axial compressors. Axial compressors, however, have the advantage of a smaller frontal area for a given flow rate, can handle higher flow rates and generally have higher efficiencies than CF compressors. For the larger engines used on civil aircraft the most suitable compressor and turbine will be of the axial type. Early axial compressors were able to raise the pressure of the incoming air around 5-fold, while modern turbofan engines have pressure ratios in excess of 30:1.

    Low pressure axial compressor scheme of the Olympus BOl.1 turbojet. (Photo Credit: Wikipedia)

    Because the pressure rises in the direction of flow through the compressor, there is an acute risk of the boundary layers separating on the compressor blades as they encounter this adverse pressure gradient. When this happens the performance of the compressor drops dramatically and the compressor is said to stall. For this reason the compression is spread over a large number of compressor stages, such that the smaller incremental increases in pressure across each stage allow engineers to obtain a large overall pressure ratio without incurring stall. A stage consists of a row of rotating blades called the rotor and a row of stationary blades called the stator. Each of these rows may consist of between 30 and 100 distinct blades, and there may be up to 20 stages between the air inlet and compressor outlet. The role of the rotor blades is to accelerate the incoming air in order to increase the kinetic energy of the fluid. Across the stators the fluid is then decelerated and, as a consequence, the fluid pressure is increased. As the pressure and density increase across each stage, the overall flow velocity is kept relatively constant by reducing the height of the blades from stage to stage. Thus the compressor tapers down from inlet to outlet.

    In an attempt to reduce the number of compressor stages for a more compact engine, a designer’s goal is to maximise the pressure ratio across each stage. The stage pressure ratio $R_s$ is given by the following expression,

    $$R_s=\left[1+\frac{\eta_s U C_a}{c_p T_{01}}\left(\tan b_1 - \tan b_2\right)\right]^{\frac{\gamma}{\gamma-1}}$$

    where $\eta_s$ is the stage isentropic efficiency, $T_{01}$ the inlet total (stagnation) temperature, $U$ the rotary speed of the compressor, $C_a$ the axial speed of the fluid, $c_p$ the specific heat at constant pressure, and $b_1$ and $b_2$ the angles of the rotor blade leading and trailing edges relative to the axial flow direction.

     

    Diagram of an axial flow compressor. (Photo Credit: Wikipedia)

    The pressure ratio across each stage can be maximised by increasing the rotary speed of the compressor $U$, the angle through which the fluid is turned across the rotor blades ($\tan b_1 - \tan b_2$), and the axial speed of the fluid $C_a$ through the compressor. However, there are limits on all three parameters.

    1. The blade tip speed, and therefore $U$, is limited by stress considerations at the root. If the blade is assumed to be of constant cross-sectional area then the centrifugal stress at the root is given by,

    $$\sigma_r=\int_{r_r}^{r_t}\rho_b\Omega^2 r\,\mathrm{d}r = \frac{1}{2}\rho_b U_t^2\left[1-\left(\frac{r_r}{r_t}\right)^2\right]$$

    where $U_t$ is the tip speed, $\rho_b$ the density of the blade material, and the ratio $r_r/r_t$ the root-to-tip ratio of the blade. To prevent the blades from detaching from the hub and destroying the engine, this root stress is not allowed to exceed a certain proof stress. It can be seen that the root stress is proportional to the square of the tip speed and decreases as the blade length becomes shorter. Since the first compressor stages have the longest blades, they limit the maximum tip speed and therefore the efficiency of the compressor. It is therefore common to split the compressor into double or triple spool configurations, such as a large fan, an intermediate-pressure and a high-pressure compressor rotating at three different speeds. In this manner the large-diameter fan can rotate at lower speeds to satisfy the stress restrictions, while the shorter-bladed high-pressure compressor may rotate at higher speeds.
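    Plugging representative numbers into the root stress expression above illustrates how tightly this constraint binds (the blade density, tip speed and root-to-tip ratio below are illustrative assumptions, not data for any specific engine):

```python
# Centrifugal root stress for a constant cross-section blade:
# sigma_r = 0.5 * rho_b * U_t^2 * (1 - (r_r/r_t)^2)
# Illustrative values: titanium-alloy blade at a transonic fan tip speed.

rho_b = 4500.0        # kg/m^3, roughly a titanium alloy
U_t = 450.0           # m/s, assumed fan tip speed
root_to_tip = 0.35    # r_r / r_t, near the practical lower limit

sigma_r = 0.5 * rho_b * U_t**2 * (1 - root_to_tip**2)
print(f"Root stress: {sigma_r / 1e6:.0f} MPa")
# ~400 MPa -- already a large fraction of a titanium alloy's yield
# strength, before bending and vibratory loads are even considered.
```

    Doubling the tip speed would quadruple this stress, which is exactly why the long-bladed front stages are put on a slower-spinning spool.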

    However, the rotational speed of the fan is typically constrained by more stringent stress considerations. In a turbofan engine the large-diameter fan at the front of the engine acts as a single-stage compressor. In modern turbofan engines the fan divides the flow, with most of the air going through the bypass duct to a propelling nozzle and only a small portion going into the core. The high root stresses caused by the long fan blades are often exacerbated by bird strikes. For mechanical reasons a lower limit on the root-to-tip ratio of 0.35 is often employed. The flow impinging on the fan is also at a very high Mach number, since the cruising speed of civil aircraft is typically around M = 0.83. Supersonic flow inevitably terminates in a shock wave, with a resulting increase in pressure and entropy over the compressor blades. Shock waves reduce the efficiency of the compressor blades since they disturb the flow over the profile, leading to boundary layer separation. Furthermore, these shock waves may cause unwanted vibrations of the fan blades that further reduce the efficiency of the compressor and increase noise. Therefore, for reasons of efficiency, reducing noise and limiting the damage of bird strikes, the tip speed of the fan is restricted; typically a relative Mach number of 1.6 is considered the upper limit.

    2. The axial speed $C_a$ has to be maximised to optimise the pressure ratio and reduce the frontal area of the engine. Similar to the argument given above, the axial speed is typically limited by compressibility effects of supersonic flow. As the pressure, static temperature and therefore the speed of sound increase from stage to stage, the compressibility effects are worst in the first stages. For the first stage the air enters axially, such that by adding the orthogonal velocity vectors $U$ and $C_a$ we get $V^2 = U^2 + C_a^2$, where $V$ is the flow speed relative to the blade. In modern engines $V$ may be in the transonic region, incurring quite large losses. In this respect twin-spool engines have the advantage that the lower-pressure compressor rotates at a lower speed, which reduces the compressibility problem.
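    The two velocity components can be combined to estimate how close the first-stage relative flow is to sonic conditions. Using the typical blade and axial speeds quoted later in this post, and assuming sea-level static conditions for the speed of sound (an assumption for illustration), the relative flow is indeed transonic:

```python
import math

# Flow speed relative to a first-stage blade: V^2 = U^2 + Ca^2.
# Speed of sound evaluated at assumed sea-level static conditions.
U = 350.0    # m/s, blade speed (typical value)
Ca = 200.0   # m/s, axial flow speed (typical value)
gamma, R, T = 1.4, 287.0, 288.15  # air properties at sea level

V = math.sqrt(U**2 + Ca**2)        # speed relative to the blade
a = math.sqrt(gamma * R * T)       # local speed of sound
print(f"V = {V:.0f} m/s, relative Mach = {V / a:.2f}")
# Relative Mach just under 1.2: transonic, hence the shock losses
# discussed above.
```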

    3. The angle through which the fluid is turned across the rotor blades, $b_1 - b_2$, is limited by the growth of the boundary layers. Compressor blades are aerofoils that function in the same manner as aeroplane wings. Therefore, as the angle of attack or camber of the aerofoil is increased to increase the turning of the flow velocity vector, the adverse pressure gradient across the suction surface increases, until at some point the boundary layer separates. As the boundary layer separates, the effective turning angle is reduced such that the total pressure increase across the stage drops.

    The limits on $U$, $C_a$ and $b_1-b_2$ all place limits on the maximum pressure ratio that can be achieved in an axial compressor. Typical values are $U \approx 350$ m/s, $C_a \approx 200$ m/s and $b_1 - b_2 < 45°$.
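    Substituting these typical values into the stage pressure ratio expression gives a feel for the achievable compression per stage. The isentropic efficiency, inlet temperature and blade angles below are illustrative assumptions, not values from any particular engine:

```python
import math

# Stage pressure ratio:
# R_s = [1 + eta_s * U * Ca / (cp * T01) * (tan b1 - tan b2)]^(gamma/(gamma-1))
# Illustrative values for a first compressor stage.

eta_s = 0.90                # stage isentropic efficiency (assumed)
U, Ca = 350.0, 200.0        # m/s, typical values quoted above
cp, T01 = 1005.0, 288.0     # J/(kg K) and K (sea-level inlet, assumed)
gamma = 1.4
b1, b2 = math.radians(45), math.radians(15)  # blade angles (assumed)

temp_term = eta_s * U * Ca / (cp * T01) * (math.tan(b1) - math.tan(b2))
R_s = (1 + temp_term) ** (gamma / (gamma - 1))
print(f"Stage pressure ratio: {R_s:.2f}")
# Around 1.7 per stage with these assumptions; six to seven such stages
# would compound to a 30:1 overall ratio, though in practice later
# stages achieve lower ratios and real engines use more stages.
```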

    Compressor blades are typically quite thin and are constructed from lightweight metallic alloys such as aluminium and titanium. The compressor blades feature an aerofoil section as shown in the figure below. The centrifugal forces that act on the airflow are balanced by higher-pressure air towards the tip of the blade. In order to obtain this higher pressure towards the tip, the blade must be twisted from root to tip to change the local angle of incidence to the flow, and thereby control the pressure variation over the blade.

     

    Key References

    Rolls-Royce (1996). The Jet Engine. 5th ed. Rolls-Royce Technical Publications.

  • Thrust Reversal

    In a typical turbofan jet engine the oncoming airflow is compressed through a series of compressor stages, mixed with fuel (typically kerosene) and combusted, drastically increasing pressure and temperature, and then expanded through a nozzle to provide thrust towards the rear of the aircraft. By accelerating the fluid aft, Newton’s Third Law dictates that this impulse is reacted by an equal and opposite force, thus propelling the aircraft forward. However, modern jet engines are also capable of producing thrust in the opposing direction. How is this possible without completely reversing the direction of airflow from the exhaust to the intake, which would seriously damage various engine components?

    Diagram of a typical gas turbine jet engine. Air is compressed by the fan blades as it enters the engine, and it is mixed and burned with fuel in the combustion section. The hot exhaust gases provide forward thrust and turn the turbines which drive the compressor fan blades.

    Thrust reversal is achieved by momentarily diverting the hot exhaust gases towards the front of the aircraft or changing the propeller/compressor pitch so that the thrust produced is directed forward. Thus thrust will act against the forward direction of travel and provide a means of deceleration. Thrust reversal is used in some flight scenarios in order to,

    • Alleviate stress and reduce wear on the brakes, or enable shorter landing distances. Reverse thrust can reduce the braking distance by a third or more!
    • Momentarily increase the braking force during emergencies or just after touchdown, when the aircraft is still travelling at high velocity and the residual aerodynamic lift is significant. Lift reduces the normal reaction force with the ground and therefore limits the friction and grip of the tyres.
    • Decelerate rapidly in flight to enable quick changes of speed. Most aircraft cannot operate thrust reversal in flight, and the majority that can are propeller-driven.
    • Push an aircraft back from a gate, a manoeuvre called “powerback”.

    Almost everyone who has sat in a row near the wings will have heard reverse thrust in action before. Next time you land wait for the sudden high-pitched increase in engine noise just after touchdown.
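To get a feel for the numbers behind the "third or more" claim, here is a minimal sketch using the constant-deceleration relation v² = 2as. The touchdown speed and deceleration values are purely illustrative assumptions, not data for any particular aircraft:

```python
def stopping_distance(v_touchdown, decel):
    """Distance to stop from v_touchdown under constant deceleration (v^2 = 2*a*s)."""
    return v_touchdown**2 / (2.0 * decel)

v = 70.0            # touchdown speed, m/s (~136 kt, assumed)
a_brakes = 2.5      # deceleration from wheel brakes alone, m/s^2 (assumed)
a_reverse = 1.25    # additional deceleration from reverse thrust, m/s^2 (assumed)

d_brakes = stopping_distance(v, a_brakes)
d_combined = stopping_distance(v, a_brakes + a_reverse)

print(f"Brakes only:          {d_brakes:6.0f} m")
print(f"Brakes + rev. thrust: {d_combined:6.0f} m")
print(f"Reduction:            {100 * (1 - d_combined / d_brakes):5.1f} %")
```

With these assumed numbers the reverse thrust contributes a third of the total deceleration, and the landing roll shortens by exactly a third, consistent with the figure quoted above.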

    The method to achieve thrust reversal varies greatly between the different types of engines:

    • Since the 1930s propeller-driven aircraft have generated reverse thrust by changing the angle of attack of their controllable-pitch propellers:
      • Older reciprocating engines and modern turboprop engines both have the ability to set the propeller to “flat pitch”, in which the propeller aerofoils produce no forward or reverse thrust but large amounts of drag instead. This allows the engine speed to be kept constant while descending.
      • The classic approach is to pitch the propeller blades to a negative angle of attack in order to direct the thrust forward.
    • In jet engines thrust reversal is not accomplished by running the engine in reverse but by diverting the high-velocity exhaust jet blast to the front of the engines. This can be achieved in different ways:
      1. The target-type thrust reverser: After the combustion chamber, reverser blades angle outward in order to prematurely redirect the high-speed jet radially outwards and towards the front of the engine. This construction generally gives the appearance of flower petals.
      2. The clamshell type: Two reverser buckets are hinged at the aft of the engine, and when deployed, intrude into the exhaust of the engine. In this manner the jet blast is captured and re-oriented towards the front.
      3. In a turbofan engine some of the air intake is not passed through the main part of the engine, but redirected along an outside channel without being combusted. This bypass duct is aptly named “cold flow” and this arrangement is used to save fuel and reduce engine noise. Furthermore, the bypass flow can also be used to channel air radially outwards and forwards to provide thrust reversal.
    The three different types of thrust reversal mechanisms explained above [1].

    Youtube has some great videos showing thrust reversal in action.

    References and Further Reading

    [1] Purdue University. “Thrust Reversing”.  https://engineering.purdue.edu/~propulsi/propulsion/jets/basics/reverse.html

  • A Brief History of Aircraft Structures

    Aircraft have changed enormously over the last century, from the early Wright Flyer flown at Kitty Hawk to the supersonic SR-71 Blackbird. Of course the developments in aeronautical engineering can be broken down into separate divisions that have developed at different rates: a) the aerodynamics, b) power plant engineering, c) control, radios and navigation aids, d) airframe engineering (e.g. hydraulic/electrical systems, interior fittings etc.), and finally e) the structural design. For example, power plants have developed in two large steps separated by a series of sudden bursts of ingenuity. In order to facilitate the first successful flight the Wright Brothers had to find a light yet powerful engine system. The next stride was the ingenious invention of the jet engine prior to and during WWII by Sir Frank Whittle and Hans von Ohain. In between, the power output of piston engines “increased almost 200 times from 12 bhp to over 2000 bhp in just 40 years, with only a ten times increase in mass (3)”. As will be outlined in this article, the design of aerospace structures on the other hand has made only one fundamental stride forward, but this change was sufficient to transform the complete design principle of modern aircraft. Today, however, strict environmental legislation and the advent of the composite era may induce further leaps in structural design.

    Fig. 1. A schematic drawing of the Wright Flyer (1)

    Fig. 2. The modern supersonic SR-71 Blackbird (2)

    1) Wire Braced Structures

    If we look at the early design of aircraft such as the Wright Flyer in Figure 1 there can really be no misunderstanding of the construction style. The entire aircraft, including most notably the wings, forward and rear structures, was constructed from rectangular frames that were prevented from shearing (forming a parallelogram) or collapsing by diagonally stretched wire. There were two major innovative thoughts behind this design philosophy. Firstly, the idea that two parallel wings would produce a lighter yet stronger structure than a single wing; and secondly, that these two wings could be supported by two light wires rather than a single, thicker wooden member. The structural advantage of the biplane construction is that the two wings, vertical struts and wires form a deep, light beam, which is more resistant to bending and twisting than a single wing. Much like a composite sandwich beam it can be treated as two stiff outer skins for high bending rigidity, connected by a lightweight “core” to provide resistance to shear and torsion.

    Fig. 3. Cutaway drawing of the 1917 Sopwith Camel (3)

    Fig. 4. Cutaway drawing of the 1935 Hawker Hurricane (3)

    The biplane construction with wire bracing was the most notable feature of aircraft construction for many years thereafter and paired nicely with lightweight materials such as bamboo and spruce (Figure 3). Wood is a composite of cellulose fibres embedded in a matrix of lignin, and the early aeronautical engineers knew to take advantage of its high specific strength and stiffness. Strangely enough, after the era of metals we are now returning to the composite roots of aircraft, albeit in a more advanced fashion. The biplane era lasted until the 1930s, at which point metal was taking over as the prime aerospace material. Initially the design philosophy was not adapted to take full advantage of thin sheet metal manufacturing techniques, such that wooden spars and struts were simply replaced by thinner metal tubing. Consequently there remained a striking similarity in construction between a 1917 (Figure 3) and a 1931 (Figure 4) fighter. Even though some thin metal sheets were being used, these components generally did not carry much load, such that the main fuselage structure featured four horizontal longerons supported by vertical struts and wire bracing. This so-called “Warren Girder” design can also be seen in some of the earliest monoplane wing constructions, such as the 1935 Hawker Hurricane. Aeronautical engineers were initially “unsure how to combine the new metal construction with a traditional fabric covering (3)” used on earlier aircraft. The onset of WWII meant that some safe and conservative design decisions were made to facilitate monoplane wings, and the “Warren Girder” principle was directly copied to the internal framework of monoplane wings (Figure 5). These early designs were far from optimised and perfectly characterise the transition period between wire-braced structures and the semi-monocoque structures we use today.

    Fig. 5. The Hawker Hurricane wing construction (3).

    2) Semi-Monocoque Structures

    The internal cross-bracing was initially acceptable for the early single- or two-seater aircraft, but would obviously not provide enough room for larger passenger aircraft. To overcome this, inspiration was taken from the long tradition of and expertise in boat building, which had already been applied to construct the fuselages of early wooden flying boats. The highest standards of yacht construction at the time featured “bent wooden frames and double or triple skins…with a clear varnished finish…and presented a much more open and usable fuselage interior (3)”. The well-established boat building techniques were thus passed on to aircraft construction to produce newer aircraft with very smooth, aerodynamic profiles.

    Fig. 6. Semi monocoque fuselage construction of an early wooden flying boat (4)

    The major advantage of this type of construction is that the outer skin of the fuselage and wing no longer just define the shape and aerodynamic profile of the aircraft, but become an active load-carrying member of the structure as well. Thus, the structure becomes “multifunctional” and more efficient, unlike the braced fuselage which would be just as strong without the fabric covering the girders. As a consequence the whole structure is generally at a uniform and lower stress level, reducing stress concentrations and giving better fatigue life. Finally, as the majority of the material is located at the outer surface of the structure the second and polar moments of area, and therefore the bending and torsional rigidities are much increased. On the other hand, the thin-skinned construction means that compression and shear buckling become the most likely forms of failure. In order to increase the critical buckling loads the skins are stiffened by stringers and broken up into smaller sections by spars and ribs.

    Fig. 7. Components of a semi monocoque wing (5)

    Because the external skin is now a working part of the structure, this type of construction came to be known as stressed skin or semi-monocoque, where monocoque means “shell in one piece” and “semi” is an English addition describing the discrete internal stiffeners. The adoption of semi-monocoque construction and the change from wood to metal naturally coincided, since sheet metal production allowed a variety of thin skins to be manufactured easily and quite cheaply, with better surface finish and superior material properties. Furthermore, metal construction was conducive to riveting, which overcame the adhesive problems of early wooden semi-monocoque aircraft such as the de Havilland Mosquito.

    Fig. 8. Cutaway Drawing of the recently released A400M aircraft (6).

    Figure 8 shows the typical construction of a modern aircraft. There have been numerous different structural arrangements over the years, but all generally feature some sort of transverse stiffener (ribs in the wings and rings in the fuselage) and longitudinal stiffeners (called stringers). The main drivers have been towards a) a reduction in the number of rivets, by moving to bonded assembly or ideally manufacturing separate components as a single piece, and b) understanding the effects and growth of cracks under static and fatigue loading by building structures that can easily be inspected or have multiple redundancies (load paths). The design and manufacturing methods of semi-monocoque aircraft are now so automated that the development of a new aluminium, medium-sized airliner “could be regarded as a routine exercise (1)”. However, the continuing legislative pressure to reduce weight and fuel consumption provides enough incentive for further development.

    3) Sandwich Structures and Composite Materials

    One of the major disadvantages of thin-skinned structures is their lack of rigidity under compressive loading, which gives them a tendency to buckle. A sheet of paper nicely illustrates this point, since it is quite strong in tension but provides no support in compression. One way of improving the rigidity of thin panels is to increase the bending stiffness with the aid of external stiffeners, which at the same time break the structure up into smaller sections. The critical buckling load is inversely proportional to the square of the width of the plate over which the load is applied; therefore skins can be made four times more resistant to buckling just by cutting the width in half. As a wing bends upwards the main compressive loads act on the top skin along the length of the wing, and therefore a large number of stringers are visible across the width.
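The inverse-square dependence on plate width can be sketched with the classical elastic buckling formula for a thin, simply supported plate under uniaxial compression. The panel dimensions and material values below are assumed purely for illustration:

```python
import math

def critical_buckling_stress(E, nu, t, b, k=4.0):
    """Classical elastic buckling stress of a long, simply supported plate under
    uniaxial compression: sigma_cr = k * pi^2 * E / (12*(1 - nu^2)) * (t/b)^2."""
    return k * math.pi**2 * E / (12.0 * (1.0 - nu**2)) * (t / b) ** 2

# Illustrative (assumed) values for an aluminium skin panel
E, nu = 70e9, 0.33   # Young's modulus (Pa) and Poisson's ratio
t = 2e-3             # skin thickness, m

sigma_wide = critical_buckling_stress(E, nu, t, b=0.40)
sigma_narrow = critical_buckling_stress(E, nu, t, b=0.20)  # adding a stringer halves the bay width

print(f"b = 400 mm: sigma_cr = {sigma_wide / 1e6:6.1f} MPa")
print(f"b = 200 mm: sigma_cr = {sigma_narrow / 1e6:6.1f} MPa")
print(f"ratio = {sigma_narrow / sigma_wide:.1f}")
```

Halving the bay width quadruples the critical stress, which is why stiffened wing skins carry so many closely spaced stringers.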

    Fig. 8. Buckling analysis of a stiffened wing panel. The stiffeners break the buckling mode shapes into smaller wavelengths that require higher energy to form compared to a single wave (7)

    Another technique to provide more rigidity is sandwich construction. This generally features a very lightweight core, such as a honeycomb lattice or a foam, sandwiched between two thin yet stiff outer panels. Here the role of the sandwich core is to carry any shear loads and separate the two skins as far as possible. The second moment of area is a function of the cube of the depth, and therefore the bending rigidity is greatly increased with this technique. Ideally, in this manner it would be possible to design an entire fuselage without any internal rings or stringers, and the Beech Starship is an excellent example of a successful application. However, there are problems in forming honeycomb cores onto doubly curved shells, since the material is susceptible to strong anticlastic curvature, forming a saddle shape when bent in one direction. Furthermore, there are problems with condensation and water ingress into the honeycomb cells, and with guaranteeing a good bond between the core and the outer skins. Foam cores can be used instead, but these tend to be heavier with lower mechanical properties. Perhaps for these reasons the current trend is away from sandwich construction (10).
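The cube-of-depth effect can be illustrated by comparing the flexural rigidity of a solid sheet against a sandwich with the same total skin mass, using the parallel-axis theorem. The moduli and thicknesses below are assumed for illustration, not taken from any particular design:

```python
def bending_rigidity_per_width(layers):
    """Flexural rigidity D (per unit width) of a symmetric stack of layers via
    the parallel-axis theorem; layers = [(E, thickness), ...] bottom to top."""
    total_t = sum(t for _, t in layers)
    z = -total_t / 2.0   # neutral axis of a symmetric stack is at mid-thickness
    D = 0.0
    for E, t in layers:
        z_mid = z + t / 2.0
        D += E * (t**3 / 12.0 + t * z_mid**2)
        z += t
    return D

E_skin, E_core = 70e9, 0.1e9   # assumed skin (aluminium) and foam core moduli, Pa
t_skin = 1e-3                  # each sandwich skin 1 mm thick

solid = bending_rigidity_per_width([(E_skin, 2 * t_skin)])
sandwich = bending_rigidity_per_width([(E_skin, t_skin), (E_core, 20e-3), (E_skin, t_skin)])

print(f"Solid 2 mm sheet:              D = {solid:10.1f} N*m")
print(f"Sandwich, 20 mm core, same skin mass: D = {sandwich:10.1f} N*m")
print(f"Stiffness gain: {sandwich / solid:.0f}x")
```

Separating the same two skins by a light 20 mm core multiplies the bending rigidity by a few hundred times for only a small mass penalty, which is the whole appeal of sandwich construction.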

    Fig. 9. A carbon fibre composite/honeycomb sandwich panel (9)

    Fig. 10. The Beech Starship whose fuselage was designed using sandwich construction with minimal internal bulkheads and ribs (8)

    One of the major applications of honeycomb structures has been in combination with composite materials. Stiff carbon composite panels are the ideal candidate for the outer skins, and the whole assembly can be co-cured together in an autoclave without having to perform any secondary bonding operations. Furthermore, the incredible specific strength and stiffness of carbon composites makes this combination an ultra-lightweight yet resilient structure for aerospace applications. Indeed, we are now at the start of the “black” carbon age in commercial aircraft design. Apart from their excellent specific strength and stiffness, composites allow the mechanical properties to be tailored by orientating the majority of plies in the direction of the load, and produce less material waste during manufacture. As a result, the first generation of commercial aircraft containing large proportions of composite parts, such as the Boeing 787 Dreamliner and Airbus A350 XWB, is planned to enter service over the coming years.

    Fig. 11. Considerable delamination leading to catastrophic failure (11)

    Considerable effort has been made to mature composite technology in order to reduce manufacturing costs, guarantee reliably high-quality laminates, understand the highly complex failure criteria and build hierarchical, multifunctional or self-healing structures. One of the major shortcomings is that the structural advantages of fibre-reinforced plastics only hold in applications where the primary loads are aligned with the fibre direction. However, if a composite plate is subjected to significant out-of-plane stresses, subsurface delaminations may develop between layers due to the weak through-thickness cohesive strength of the composite. These interlaminar delaminations are a significant problem, as they are difficult to detect by visual inspection and may reduce the compressive strength of the laminate by up to 60%.

    4) Novel Designs

    With environmental legislation becoming ever stricter, it is imperative that new concepts for lightweight and fuel-efficient aircraft are found swiftly. Although the pressure to develop advanced composite materials is high, it must be remembered that 100 years of innovation were required to reach the stage where large metal semi-monocoque structures could be manufactured in the 1940s, and another 30 years to fully understand all the failure criteria. Thus we may still require significant research and development before all current issues with composite materials are resolved. Apart from carbon fibre and other composites, researchers have also been looking into completely redefining the shape of aircraft. Researchers at MIT have been developing the blended wing concept, and NASA is exploring the technology of morphing or shape-changing aircraft, taking inspiration directly from nature.

    Fig. 12. Illustration of the MIT Silent Aircraft concept (12).

    Fig. 13. NASA morphing wing aircraft (13)

    Whatever the final solution might look like, the next 50 years in aerospace engineering will be incredibly innovative and ground-breaking, and aerospace will be an exciting industry to be part of!

    References

    (3) Cutler, John (1992). Understanding Aircraft Structures. 2nd Edition. Blackwell Scientific Publications, Oxford.

    (10) Potter, Kevin (1996). An Introduction to Composite Products: Design, Development and Manufacture. Chapman & Hall, London.

    Images

    (1) http://www.pbs.org/wgbh/nova/wright/images/flye-lotech.gif

    (2) http://thexodirectory.com/wp-content/uploads/2011/05/Air-to-air-overhead-front-view-of-an-SR-71A-460×361.jpg

    (4) http://imgc.artprintimages.com/images/art-print/j-r-eyerman-workmen-building-flying-boat-that-was-designed-by-millionaire-howard-r-hughes_i-G-37-3793-OAAIF00Z.jpg

    (5) http://www.nomenclaturo.com/wp-content/uploads/Airplane-Wing-Part-Diagram-Terminology.png

    (6) http://pds13.egloos.com/pds/200906/24/60/a0118060_4a4194709ef22.jpg

    (7) http://www.dnv.com/binaries/PULS-buckling_tcm4-284864.JPG

    (8) http://www.bobscherer.com/Images/Pages/Starship/Starship%20page/NC-6%20Over%20Foggy%20Hills.jpg

    (9)http://upload.wikimedia.org/wikipedia/commons/3/3d/Steinbichler_Shearography_Honeycomb_with_CFRP_Top_Layer_Artificial_failures_that_simulate_layer-core_delaminations_Material.jpg

    (11) http://en.wikipedia.org/wiki/File:Delamination-CFRP.jpg

    (12) http://silentaircraft.org/

    (13) http://www.espaciolutacoot.com.mx/images/postcard/large/nave1.jpg

  • The Scramjet

    The American Air Force has reported that a test of the unmanned hypersonic X-51A “Waverider” scramjet has failed. During the test flight the aircraft separated successfully from the wing of a B-52 bomber, but only 16 seconds later a defect in a control fin caused the “Waverider” to spiral out of control and eventually break up over the Pacific. The test aircraft was planned to reach a top speed of 7,000 km/h and hold Mach 6 for 300 seconds. This event continues the series of failed tests that have plagued the project since its first flight in May 2010. Of the original four prototypes, the Pentagon now has only one test aircraft remaining. In 2004 the older X-43 scramjet model reached air speeds of up to Mach 10 – equal to around 11,000 km/h.

    Scramjet released from B-52 carrier wing [1]

    The Scramjet Technology

    A scramjet, or supersonic combustion ramjet, is a development of the ramjet engine in which combustion takes place at supersonic rather than subsonic speeds. Both engine variants require high initial vehicle velocities in order to compress and decelerate the incoming air in a converging inlet. Since the airflow throughout the engine, and especially through the combustion process, remains supersonic, the scramjet can operate efficiently at very high flight velocities.

    A model of the “X-51A Waverider” [1]

    The scramjet comprises only a converging inlet, a fuel injection point and a diverging nozzle. As the supersonic airflow is compressed, the temperature of the fluid rises to such an extent that a simple injection of gaseous fuel is sufficient for it to combust with the atmospheric oxygen. The combustion process raises the enthalpy of the fluid such that expansion through the divergent exhaust nozzle leads to tremendous acceleration of the air and consequently thrust. The principle of expanding a high-enthalpy fluid to generate thrust is the same as in standard turbofan and turbojet engines, only that a scramjet does not use multiple rotating compressor stages in the inlet. As they lack mechanical compressors, the operation of scramjets is limited to near-hypersonic velocities, since the high kinetic energy of a hypersonic flow is required to compress the incoming air to operational conditions. Thus, a scramjet-powered vehicle must be accelerated to the required velocity by some other means of propulsion.
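Why a simple fuel injection suffices can be seen from the ideal-gas stagnation temperature relation T0 = T(1 + (γ−1)/2·M²): at hypersonic speeds, merely slowing the air heats it far beyond typical fuel autoignition temperatures. This is only a rough sketch, since γ varies with temperature at these conditions and a scramjet inlet deliberately decelerates the flow only partially:

```python
def stagnation_temperature(T_static, mach, gamma=1.4):
    """Ideal-gas stagnation temperature: T0 = T * (1 + (gamma - 1)/2 * M^2)."""
    return T_static * (1.0 + 0.5 * (gamma - 1.0) * mach**2)

T_ambient = 220.0  # K, assumed static temperature at high altitude
for M in (3, 6, 10):
    print(f"Mach {M:2d}: T0 = {stagnation_temperature(T_ambient, M):6.0f} K")
```

At Mach 6 the stagnation temperature with these assumed values is around 1,800 K, which illustrates why the compressed inlet air can ignite injected fuel without any spark or mechanical compressor.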

    Comparison of Turbojet, Ramjet and Scramjet [2]

    The elimination of all moving parts greatly reduces the complexity, weight and susceptibility to mechanical failure of the engine. Furthermore, in turbofans and turbojets the rotating compressors are driven by turbine stages located in the diverging nozzle. The turbine stages are powered by the accelerating exhaust gases and therefore reduce the available energy output. In turbofan and turbojet engines the energy output and thrust can be directly increased by raising the turbine entry temperature, i.e. burning more fuel or guaranteeing a more efficient combustion process. Over the years turbine entry temperatures have approached the melting point of the turbine blade metals, increasing the risk of static and creep failure at the highly stressed turbine inner hub. Historically, solutions to this problem have included nickel-based superalloys, thermal barrier coatings, and casting the turbine blade as a single crystal to remove the deformation planes at the grain boundaries. Today almost all turbine blades also feature direct air-film cooling. In this technique cooler air from the compressor stages is bled to, and then through, channels in the turbine blades and finally allowed to flow out through tiny holes on the blade surface. This bled air is ultimately lost from doing any useful work as combusted air. Finally, as turbofan and turbojet engines approach Mach 1 there is an issue with the flow becoming supersonic at the tips of the rotating compressor blades. Any supersonic flow will terminate in a shockwave that disturbs the uniformity of the flow through the compressor and causes pressure surges. This reduces the efficiency of the compressor or can even cause individual blades to break off.
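The blade-tip problem follows from simple kinematics: the flow speed relative to the tip is the vector sum of the tangential tip speed and the axial (flight) speed. The fan radius and rotational speed below are assumed for illustration, and the ambient conditions are taken as sea-level standard for simplicity:

```python
import math

def tip_relative_mach(rpm, tip_radius, flight_speed, T_static=288.15, gamma=1.4, R=287.0):
    """Relative Mach number at the blade tip: vector sum of axial (flight) speed
    and tangential tip speed, divided by the local speed of sound."""
    a = math.sqrt(gamma * R * T_static)            # speed of sound, m/s
    u_tip = rpm * 2.0 * math.pi / 60.0 * tip_radius  # tangential tip speed, m/s
    return math.sqrt(u_tip**2 + flight_speed**2) / a

# Illustrative (assumed) figures for a large fan: 1.5 m tip radius at 2500 rpm
print(f"Static, on the runway: M_tip = {tip_relative_mach(2500, 1.5, 0.0):.2f}")
print(f"Flying at 250 m/s:     M_tip = {tip_relative_mach(2500, 1.5, 250.0):.2f}")
```

Even standing still, a large fan at these assumed settings already has supersonic tips, and forward speed pushes the relative Mach number higher still, hence the shock and surge problems described above.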

    Thus the higher efficiency and reduced complexity make the scramjet an attractive solution for hypersonic propulsion. Currently scramjet technology would facilitate rapid airborne strikes, but it is not yet suited for manned flight.


    References

    [1] http://www.faz.net/aktuell/politik/ausland/sechsfache-schallgeschwindigkeit-test-von-hyperschall-jet-fehlgeschlagen-11857425.html

    [2] Wikipedia. http://en.wikipedia.org/wiki/File:Turbo_ram_scramjet_comparative_diagram.svg

  • Composite Manufacturing – Autoclave Variability

    Throughout the last four decades the exploitation of fibre-reinforced plastics (FRP) in engineering structures has been steadily diversifying, from sports equipment and high-performance racing cars to helicopters and, most recently, commercial aeroplanes. Composite materials are essentially a combination of two or more dissimilar materials used together in order to combine their best properties, or to impart a new set of characteristics that neither of the constituent materials could achieve on their own. Engineering composites are typically built up from individual plies of continuous, straight fibres (e.g. carbon, glass, aramid) embedded in a host polymer matrix (e.g. phenolic, polyester, epoxy), which are laminated layer by layer to build up the final material/structure.

    All manufacturing processes are subject to a certain degree of variability. Composite manufacture differs from most metallic manufacturing routes in that the material is created at the same time as the structural geometry of the part. In the aerospace industry, autoclave-cured mouldings of pre-impregnated reinforcements (prepreg) are dominant. In this case the hardest dimension to control is the thickness, and this will be the major concern of this article.

    Lean manufacturing calls for variability in thickness expressed as a standard deviation of 1/6th of the drawing tolerance – the “6-Sigma” tolerance band – giving a thickness defect rate of 1 in 1,000,000. In reality, current thickness defect rates are in the range of 1 in 10 for composite components (1). The biggest influence on laminate thickness is the consolidation pressure. As the consolidation pressure is increased the laminate is compacted more, and thus more resin may be bled out of the prepreg. As a result the fibre volume fraction can vary from around 50% at 1 bar consolidation to almost 70% at 6 bar. Such large variations in volume fraction will naturally influence the consolidated thickness. The external pressure “felt” by the laminate is not just a function of the target autoclave setting. Insufficient contact between the vacuum bag and the laminate, and wrinkles in the bag, will greatly reduce the consolidation pressure experienced by the laminate. Since vacuum bag application is a manual process and the bagging material can be quite flimsy, a certain amount of wrinkling is inevitable. Thus it can be very difficult to reduce this type of variability, and in the worst case defects such as delaminated plies may occur.
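The link between fibre volume fraction and cured thickness follows from a simple mass balance: the fibre stays in the ply while resin bleeds out, so the cured ply thickness is the fibre areal weight divided by the fibre density and volume fraction. A minimal sketch with assumed prepreg values:

```python
def cured_ply_thickness(fibre_areal_weight, fibre_density, vf):
    """Cured ply thickness (m) from fibre areal weight (kg/m^2), fibre density
    (kg/m^3) and fibre volume fraction: t = FAW / (rho_f * Vf)."""
    return fibre_areal_weight / (fibre_density * vf)

faw = 0.200     # kg/m^2 (a 200 gsm carbon prepreg, assumed)
rho_f = 1780.0  # kg/m^3, a typical carbon fibre density
for vf in (0.50, 0.60, 0.70):
    t = cured_ply_thickness(faw, rho_f, vf)
    print(f"Vf = {vf:.2f}: ply thickness = {t * 1e3:.3f} mm")
```

Moving from 50% to 70% fibre volume fraction thins each ply by nearly 30% with these assumed values, showing how strongly consolidation pressure (via resin bleed) drives the laminate thickness.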

    During curing the external temperature is typically ramped up in two stages and held constant in between – the so-called “dwell period” – in order to allow the actual mould temperature to catch up and ensure full consolidation and cure. During the early part of the cure the resin viscosity will first fall as a result of the increasing temperature, but then rise suddenly as the mould temperature reaches the gelation point, causing the resin to solidify. While the resin viscosity is low, internal flows of resin will occur.

    Composite Consolidation Programme: Variation of Viscosity with Temperature (3)

    Around corners there is the added difficulty of preventing fibre wrinkling or fibre bridging. If plies cannot slip over each other as they consolidate over inside radii, fibre bridging will occur and the laminate will become thicker in the corner. The fibres that bridge the radius directly react the consolidation pressure, leading to a reduced resin pressure beneath the bridged fibres. Resin will therefore tend to flow towards this region of bridged fibres, but if it does not do so sufficiently, high local voidage will result.

    Fibre Bridging (3)
    Fibre Wrinkling (3)

    Upon consolidation the resin will start to shrink and, since it is constrained, the bridged pocket will be placed under tensile stress. This may cause cracking in the brittle resin and thus internal failure before any external load has been applied to the part. Fibre bridging may be reduced by using rollers to press the fabric into the corner or by incorporating slip lines into the layup. However, especially in the latter case, this will complicate the layup and increase manufacturing times.

    Slip Lines in Layup (3)

    Equally, if plies cannot slip over external radii then fibre wrinkling or “earing” will occur. Although this does not produce a resin sink, the wrinkled area will be voidy and have poorly controlled fibre orientation, leading to a reduction in mechanical properties. Fibre wrinkling may also be exacerbated by wrinkles in the vacuum bag over the corner.

    Taking the example of the component below, the real laminate thickness and the target thickness can differ widely. In zone 1 the laminate is likely to be thinner as a result of resin bleeding out of the component, unless some sort of resin dam is used. Zones 3 and 5 are likely to be thinner due to resin flow from these areas into the resin sink over the internal radius at zone 4. Ideally the effects of internal and external radii would cancel out at zones 4 and 5, but inaccuracies in the layup or induced tensions in the plies will typically prevent this. The most critical section of the component is undoubtedly zone 6, where high voidage is very likely due to the difficulty of bleeding sufficient resin into the area and the two adjacent internal radii.

    Thickness Variation in Composite Moulding (3)

    Thickness deviations are only one form of variability. Other defects may stem from the part design, the manufacturing design, the lay-up process or the autoclave process. To produce reliable components with tightly toleranced dimensions, lay-ups are typically made balanced (equal numbers of +Angle° and -Angle° plies) and symmetric about the mid-plane in order to avoid thermally induced distortions. Unbalanced or unsymmetric laminates manufactured as plates on flat tools will warp and twist as a result of the different thermal expansion coefficients of the different layers. However, if the resin content varies between plies the thermal properties will vary accordingly and the laminate will effectively be unbalanced. For a typical prepreg the weight-per-unit-area tolerance limits can be up to 5% on both prepreg and fibre weight, and resin contents may have an even slightly wider tolerance band (1). Considering that resin and fibre contents directly influence the mechanical properties of the composite, it can be quite challenging to decrease variability and guarantee reliable components within such a wide tolerance band.
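Checking a stacking sequence for balance and symmetry is easily automated; here is a minimal sketch (ply angles in degrees, listed from the bottom face to the top):

```python
from collections import Counter

def is_symmetric(stack):
    """Symmetric about the mid-plane: the sequence equals its own reverse."""
    return stack == stack[::-1]

def is_balanced(stack):
    """Balanced: every +theta ply (theta not 0 or 90) has a matching -theta ply."""
    counts = Counter(stack)
    return all(counts[a] == counts[-a] for a in counts if a % 90 != 0)

quasi_iso = [45, -45, 0, 90, 90, 0, -45, 45]
print(is_symmetric(quasi_iso), is_balanced(quasi_iso))  # True True
bad = [45, 0, 90, 45]
print(is_symmetric(bad), is_balanced(bad))              # False False
```

A check like this catches layup-table errors before any material is cut; it cannot, of course, catch the resin-content variations described above, which unbalance a nominally correct laminate.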

    Additional distortions arise if aluminium or steel tooling is used. Metal tools have higher coefficients of thermal expansion than composites, and the autoclave cure occurs at elevated temperatures, typically 180°C. Therefore the tooling will expand more than the composite, putting strains onto the outermost ply. These surface strains may be exacerbated by local features such as corners and joggles.

    A considerable source of variability around corners is the so-called “spring-in” effect. As the laminate cools down from cure it contracts far more through the thickness than in plane. In order to maintain continuity of the profile without causing residual stresses, the corner angles close up. This can result in changes of corner angle of about 1° for a 150°C change in temperature. Other defects such as fibre wrinkling or bridging will worsen this effect. In general it is very difficult to accurately predict what will happen for a given geometry.

    Composite Spring-In (3)
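The thermoelastic part of the spring-in can be estimated from the mismatch between in-plane and through-thickness thermal expansion (a Radford-type estimate). The CTE values below are assumed, and chemical cure shrinkage, which is not included here, adds a further contribution of similar form, which is why measured spring-in is often larger than the purely thermal prediction:

```python
def spring_in(angle_deg, alpha_inplane, alpha_through, delta_T):
    """Thermoelastic spring-in (deg) of a curved laminate section:
    d_theta = theta * (a_L - a_T) * dT / (1 + a_T * dT)."""
    return angle_deg * (alpha_inplane - alpha_through) * delta_T / (1.0 + alpha_through * delta_T)

# Assumed CFRP values: in-plane CTE ~3e-6/K, through-thickness ~30e-6/K,
# cooling by 150 K from the cure temperature
d = spring_in(90.0, 3e-6, 30e-6, -150.0)
print(f"Thermoelastic spring-in of a 90 deg corner: {d:.2f} deg")
```

With these assumed values the thermal contribution alone is a few tenths of a degree on a 90° corner; cure shrinkage and layup defects account for the remainder of the roughly 1° figure quoted above.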

    In addition, other sources of defects include:

    • Surface scratches, depressions and dents
    • Delaminations between plies or voids
    • Foreign-object inclusions within the layup, such as a ruler
    • Undercure or overcure (burning)
    • Tool drop or other impact events that can cause internal resin damage or delaminations

    In general most of these defects can be controlled by well-trained and highly motivated factory staff. Engineers and factory management should work together to ensure that all employees involved with the layup and curing process are aware of all possible sources of variability and how to mitigate them. In this respect, detailed technical training places more responsibility on the shoulders of employees and gives the staff the deserved recognition of being an important cog in the workings of the company. Furthermore, the importance of a well-lit, comfortable working environment and a positive atmosphere should not be understated, and can go a long way to guaranteeing high-quality mouldings. A well-trained, highly motivated and happy staff is the first line of defence against poor parts.

    Next, it is important to follow a concurrent design philosophy throughout the development process of a component. Thus the design, stress, manufacturing and quality control engineers must work together simultaneously in order to come up with a solution that fulfils all functional needs but can also be manufactured at a profit without unnecessary defects. The classical philosophy of separately designing a functional component that is then handed to the production engineers makes manufacturing high-quality laminates incredibly difficult and will incur significant secondary costs.

    Finally, specific sources of variability can then be handled on a case-by-case basis. Thus the component’s shape and the type of prepreg to be used will influence the mould material and shape design; the curing temperature and pressure; the possible inclusion of slip lines; and the laminate stacking sequence, as discussed above. In conclusion, manufacturing high-quality laminates for the aerospace industry is not an easy task, and is even more daunting considering the size of the current largely composite Boeing 787 Dreamliner and Airbus A350 XWB projects. Each design decision must be weighed against its influence on the manufacturing process, and every little detail is important!

    References

    (1) Potter, Kevin (1996). An Introduction to Composite Products: Design, Development and Manufacture. Chapman & Hall, London.

    (2) http://en.wikipedia.org/wiki/File:Delamination-CFRP.jpg

    (3) Potter, Kevin (2011). Lecture 4. Basic Processes – Variability and defects. University of Bristol, Bristol.