Tag: Engineering

  • Rocket Science 101: Operating Principles

    In a previous post we covered the history of rocketry over the last 2000 years. By means of the Tsiolkovsky rocket equation we also established that the thrust produced by a rocket is equal to the mass flow rate of the expelled gases multiplied by their exit velocity. In this way, chemically fuelled rockets are much like traditional jet engines: an oxidising agent and fuel are combusted at high pressure in a combustion chamber and then ejected at high velocity. So the means of producing thrust are similar, but the mechanism varies slightly:

    • Jet engine: A multistage compressor increases the pressure of the air impinging on the engine nacelle. The compressed air is mixed with fuel and then combusted in the combustion chamber. The hot gases are expanded in a turbine and the energy extracted from the turbine is used to power the compressor. The mass flow rate and velocity of the gases leaving the jet engine determine the thrust.
    • Chemical rocket engine: A rocket differs from the standard jet engine in that the oxidiser is also carried on board. This means that rockets work in the absence of atmospheric oxygen, i.e. in space. The rocket propellants can be in solid form, ignited directly in the propellant storage casing, or in liquid form, pumped into a combustion chamber at high pressure and then ignited. Compared to standard jet engines, rocket engines have much higher thrust-to-weight ratios (thrust per unit weight), but are less fuel efficient.
    A turbojet engine [1]
    A liquid propellant rocket engine [1]

    In this post we will have a closer look at the operating principles and equations that govern rocket design. An introduction to rocket science if you will…

    The fundamental operating principle of rockets can be summarised by Newton’s laws of motion. The three laws:

    1. Objects at rest remain at rest and objects in motion remain at constant velocity unless acted upon by an unbalanced force.
    2. Force equals mass times acceleration (or F = ma).
    3. For every action there is an equal and opposite reaction.

    are known to every high school physics student. But how exactly do they relate to the motion of rockets?

    Let us start with the two qualitative laws (the first and third), and then return to the more quantitative second law.

    Well, the first law simply states that to change the velocity of the rocket, from rest or a finite non-zero velocity, we require the action of an unbalanced force. Hence, the thrust produced by the rocket engines must be greater than the forces slowing the rocket down (friction) or pulling it back to earth (gravity). Fundamentally, Newton’s first law also applies to the expulsion of the propellants: the pressure of the combustion gases inside the rocket must be greater than the outside atmospheric pressure for the gases to escape through the rocket nozzle.

    A more interesting implication of Newton’s first law is the concept of escape velocity. As the force of gravity reduces with the square of the distance from the centre of the earth (F_{gravity} = \frac{G M_1 M_2}{r^2}), and drag on a spacecraft is basically negligible once outside the Earth’s atmosphere, a rocket travelling at 40,270 km/hr (or 25,023 mph) will eventually escape the pull of Earth’s gravity, even when the rocket’s engines have been switched off. With the engines switched off, the gravitational pull of earth is slowing down the rocket. But as the rocket is flying away from Earth, the gravitational pull is simultaneously decreasing at a quadratic rate. When starting at the escape velocity, the initial inertia of the rocket is sufficient to guarantee that the gravitational pull decays to a negligible value before the rocket comes to a standstill. Currently, the spacecraft Voyager 1 and 2 are on separate journeys to outer space after having been accelerated beyond escape velocity.

    At face value, Newton’s third law, the principle of action and reaction, is seemingly intuitive in the case of rockets. The action is the force of the hot, highly directed exhaust gases in one direction, which, as a reaction, causes the rocket to accelerate in the opposite direction. When we walk, our feet push against the ground, and as a reaction the surface of the Earth acts against us to propel us forward.

    So what does a rocket “push” against? The molecules in the surrounding air? But if that’s the case, then why do rockets work in space?

    The thrust produced by a rocket is a reaction to mass being hurled in one direction (i.e. to conserve momentum, more on that later) and not a result of the exhaust gases interacting directly with the surrounding atmosphere. As the rocket’s exhaust consists entirely of propellant originally carried on board, a rocket essentially propels itself by expelling part of its mass at high speed in the direction opposite to the intended motion. This “self-cannibalisation” is why rockets work in the vacuum of space, where there is nothing to push against. So the rocket doesn’t push against the air behind it at all, even when inside the Earth’s atmosphere.

    Newton’s second law gives us a feeling for how much thrust is produced by the rocket. The thrust is equal to the mass of the burned propellants multiplied by their acceleration. The capability of rockets to take-off and land vertically is testament to their high thrust-to-weight ratios. Compare this to commercial jumbo or military fighter jets which use jet engines to produce high forward velocity, while the upwards lift is purely provided by the aerodynamic profile of the aircraft (fuselage and wings). Vertical take-off and landing (VTOL) aircraft such as the Harrier Jump jet are the rare exception.

    At any time during the flight, the acceleration of the rocket follows from Newton’s second law, a = F_{net}/m, where F_{net} is the net thrust of the rocket (engine thrust minus drag) and m is the instantaneous mass of the rocket. As propellant is burned, the mass m of the rocket decreases, such that the highest accelerations are achieved towards the end of a burn. On the flipside, the rocket is heaviest on the launch pad, so the engines have to produce maximum thrust to get the rocket away from the launch pad quickly (the net vertical acceleration being F_{net}/m - g).
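    To make this concrete, here is a minimal sketch of how the net vertical acceleration grows as propellant is depleted. All numbers are purely illustrative, not those of any real vehicle:

```python
# Sketch: net vertical acceleration of a rocket, a = F_net / m - g,
# evaluated at lift-off and at burnout. Illustrative numbers only.
G0 = 9.81  # gravitational acceleration, m/s^2

def net_acceleration(thrust_n, mass_kg):
    """Instantaneous vertical acceleration for a given thrust and current mass."""
    return thrust_n / mass_kg - G0

thrust = 35_000_000.0   # constant engine thrust, N
m_full = 2_900_000.0    # mass on the launch pad, kg
m_empty = 900_000.0     # mass at burnout, kg

a_liftoff = net_acceleration(thrust, m_full)
a_burnout = net_acceleration(thrust, m_empty)
# The same thrust accelerates the much lighter rocket far harder at burnout.
```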

    However, Newton’s second law only applies to each instantaneous moment in time. It does not allow us to make predictions of the rocket velocity as fuel is depleted. Mass is considered to be constant in Newton’s second law, and therefore it does not account for the fact that the rocket accelerates more as fuel inside the rocket is depleted.

    The rocket equation

    The Tsiolkovsky rocket equation, however, takes this into account. The motion of the rocket is governed by the conservation of momentum. When the rocket and internal gases are moving as one unit, the overall momentum, the product of mass and velocity, is equal to P_1. Thus, for a total mass of rocket and gas m = m_r + m_g moving at velocity v

    mv = \left(m_r + m_g\right)v = P_1

    As the gases are expelled through the rear of the rocket, the overall momentum of the rocket and fuel has to remain constant as long as no external forces act on the system. Thus, if a very small amount of gas \mathrm{d}m is expelled at velocity v_e relative to the rocket (either in the direction of v or in the opposite direction), the overall momentum of the system (sum of rocket and expelled gas) is

    \left(m - \mathrm{d}m\right)\left(v + \mathrm{d}v_r\right) + \mathrm{d}m\left(v + v_e\right) = P_2

    As P_2 has to equal P_1 to conserve momentum

    mv = \left(m - \mathrm{d}m\right)\left(v + \mathrm{d}v_r\right) + \mathrm{d}m\left(v + v_e\right)

    and by isolating the change in rocket velocity \mathrm{d}v_r

    \left(m - \mathrm{d}m\right)\mathrm{d}v_r = -v_e\,\mathrm{d}m
    \therefore \mathrm{d}v_r = -\frac{\mathrm{d}m}{m - \mathrm{d}m}\,v_e

    The negative sign in the equation above indicates that the rocket always changes velocity in the opposite direction of the expelled gas, as intuitively expected. So if the gas is expelled in the opposite direction of the rocket motion v (so v_e is negative), then the change in the rocket velocity will be positive and it will accelerate.

    At any time t the quantity M = m - \mathrm{d}m is equal to the residual mass of the rocket (dry mass + propellant) and \mathrm{d}m = \mathrm{d}M denotes its change. If we assume that the exhaust velocity of the gas remains constant throughout, we can easily find the total change in velocity as the rocket changes from an initial mass M_o to a final mass M_f. So,

    \Delta v = -\int_{M_o}^{M_f} v_e \frac{\mathrm{d}M}{M} = -v_e \ln M \Big|_{M_o}^{M_f} = v_e \left(\ln M_o - \ln M_f\right) = v_e \ln \frac{M_o}{M_f}

    This equation is known as the Tsiolkovsky rocket equation and is applicable to any body that accelerates by expelling part of its mass at a specific velocity. Even though the expulsion velocity may not remain constant during a real rocket launch, we can refer to an effective exhaust velocity that represents a mean value over the course of the flight.
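    As a quick sanity check, the closed-form result can be compared against a step-by-step simulation that conserves momentum over many small mass expulsions. The masses below are invented for illustration:

```python
# Sketch: verify dv = v_e ln(M0/Mf) by expelling the propellant
# in many small increments and summing dv = v_e * dM / M each step.
import math

def delta_v_simulated(m0, mf, v_e, steps=100_000):
    """Accumulate the velocity change over many small mass expulsions."""
    dm = (m0 - mf) / steps
    m, dv = m0, 0.0
    for _ in range(steps):
        dv += v_e * dm / m   # velocity gained from expelling dm at speed v_e
        m -= dm
    return dv

m0, mf, v_e = 10_000.0, 4_000.0, 3_500.0   # kg, kg, m/s (illustrative)
analytic = v_e * math.log(m0 / mf)
simulated = delta_v_simulated(m0, mf, v_e)
# simulated converges to the Tsiolkovsky result as steps grows
```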

    The Tsiolkovsky rocket equation shows that the change in velocity attainable is a function of the exhaust jet velocity and the ratio of original take-off mass (structural weight + fuel = M_o) to its final mass (structural mass + residual fuel = M_f). If all of the propellant is burned, the mass ratio expresses how much of the total mass is structural mass, and therefore provides some insight into the efficiency of the rocket.

    In a nutshell, the greater the ratio of fuel to structural mass, the more propellant is available to accelerate the rocket and therefore the greater the maximum velocity of the rocket.

    So in the ideal case we want a bunch of highly reactive chemicals magically suspended above an ultralight means of combusting said fuel.

    In reality this means we are looking for a rocket propelled by a fuel with high efficiency of turning chemical energy into kinetic energy, contained within a lightweight tankage structure and combusted by a lightweight rocket engine. But more on that later!

    Thrust

    Often, we are more interested in the thrust created by the rocket and its associated acceleration a_r. By dividing the rocket equation above by a small time increment \mathrm{d}t and again assuming v_e to remain constant

    a_r = \frac{\mathrm{d}v_r}{\mathrm{d}t} = -\frac{\mathrm{d}M}{\mathrm{d}t} \frac{v_e}{M} = \frac{\dot{M}}{M} v_e

    and the associated thrust F_r acting on the rocket is

    F_r = M a_r = \dot{M} v_e

    where \dot{M} is the mass flow rate of gas exiting the rocket. If the differences between the exit pressure of the combustion gases and the surrounding ambient pressure are accounted for, this becomes:

    F_r = \dot{M} v_e + \left(p_e - p_{ambient}\right) A_e

    where v_e is the jet velocity at the nozzle exit plane, A_e is the flow area at the nozzle exit plane, i.e. the cross-sectional area of the flow where it separates from the nozzle, p_e is the static pressure of the exhaust jet at the nozzle exit plane and p_{ambient} the pressure of the surrounding atmosphere.

    This equation provides some additional physical insight. The term \dot{M} v_e is the momentum thrust, which is constant for a given throttle setting. The difference between gas exit pressure and ambient pressure multiplied by the nozzle exit area provides additional thrust known as pressure thrust. With increasing altitude the ambient pressure decreases, and as a result the pressure thrust increases. So rockets actually perform better in space, where the ambient pressure around the rocket is negligibly small. (At sea level, conversely, a strongly overexpanded exhaust jet can separate from the nozzle wall before the exit plane, effectively reducing A_e.) For now it will suffice to say that pressure thrust typically increases by around 30% from launch pad to leaving the atmosphere, but we will return to the physics behind this in the next post.
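    The two thrust contributions can be sketched numerically. The engine figures below are illustrative assumptions, not data for any real engine:

```python
# Sketch: momentum thrust plus pressure thrust, F = Mdot * v_e + (p_e - p_amb) * A_e.
# All engine numbers are illustrative assumptions.
def rocket_thrust(mdot, v_e, p_exit, p_ambient, a_exit):
    momentum_thrust = mdot * v_e               # Mdot * v_e
    pressure_thrust = (p_exit - p_ambient) * a_exit
    return momentum_thrust + pressure_thrust

mdot, v_e = 500.0, 3000.0        # mass flow (kg/s), exhaust velocity (m/s)
a_exit, p_exit = 4.0, 40_000.0   # nozzle exit area (m^2), exit pressure (Pa)

thrust_sea_level = rocket_thrust(mdot, v_e, p_exit, 101_325.0, a_exit)
thrust_vacuum = rocket_thrust(mdot, v_e, p_exit, 0.0, a_exit)
# As ambient pressure falls to zero, the pressure-thrust term grows,
# so the same engine produces more thrust in vacuum.
```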

    Impulse and specific impulse

    The overall amount of thrust is typically not used as an indicator of rocket performance. Better indicators of an engine’s performance are the total and specific impulse figures. Ignoring any external forces (gravity, drag, etc.) the impulse is equal to the change in momentum of the rocket (mass times velocity) and is therefore a better metric to gauge how much mass the rocket can propel and to what maximum velocity. For a change in momentum \Delta p the impulse is

    I = \Delta p = \Delta (mv) = F_{average} \Delta t

    So to maximise the impulse imparted on the rocket we want to maximise the amount of thrust F acting over the burn interval \Delta t. If the burn period is broken into a number of finite increments, then the total impulse is given by

    I = \sum_{n=1}^{N} F_n \Delta t_n

    Therefore, impulse is additive and the total impulse of a multistage rocket is equal to the sum of the impulse imparted by each individual stage.
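    The additivity of impulse is easy to demonstrate with a short sketch; the thrust histories below are invented for illustration:

```python
# Sketch: total impulse as the sum of F_n * dt_n over burn increments,
# and its additivity across stages. Thrust histories are illustrative.
def total_impulse(thrust_profile):
    """thrust_profile: list of (thrust_N, duration_s) increments."""
    return sum(f * dt for f, dt in thrust_profile)

stage1 = [(7_000_000.0, 60.0), (6_500_000.0, 60.0)]   # (N, s)
stage2 = [(1_000_000.0, 300.0)]

I1 = total_impulse(stage1)
I2 = total_impulse(stage2)
I_total = total_impulse(stage1 + stage2)
# Impulse is additive: the two-stage total equals the sum of the stages.
```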

    By specific impulse we mean the net impulse imparted by a unit mass of propellant; it measures the efficiency with which combustion of the propellant is converted into impulse. The specific impulse is therefore a metric related to a specific propellant system (fuel + oxidiser) and essentially normalises the exhaust velocity by the standard acceleration of gravity:

    I_{sp} = v_e / g

    where v_e is the effective exhaust velocity and g = 9.81 m/s². Different fuel and oxidiser combinations have different values of I_{sp} and therefore different exhaust velocities.

    A typical liquid hydrogen/liquid oxygen rocket will achieve an I_{sp} around 450 s with exhaust velocities approaching 4500 m/s, whereas kerosene and liquid oxygen combinations are slightly less efficient with I_{sp} around 350 s and v_e around 3500 m/s. Of course, a propellant with higher values of I_{sp} is more efficient as more thrust is produced per unit of propellant.
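    Converting between the two figures is a one-liner; the sketch below uses the propellant values quoted above, with g taken as 9.81 m/s²:

```python
# Sketch: converting between specific impulse and effective exhaust
# velocity via I_sp = v_e / g, for the propellant figures in the text.
G0 = 9.81  # m/s^2

def isp_from_ve(v_e):
    return v_e / G0

def ve_from_isp(isp):
    return isp * G0

ve_hydrolox = ve_from_isp(450.0)   # hydrogen/oxygen: I_sp ~ 450 s
ve_kerolox = ve_from_isp(350.0)    # kerosene/oxygen: I_sp ~ 350 s
```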

    Delta-v and mass ratios

    The Tsiolkovsky rocket equation can be used to calculate the theoretical upper limit in total velocity change, called delta-v, for a certain amount of propellant mass burned at a constant exhaust velocity v_e. At an altitude of 200 km an object needs to travel at 7.8 km/s to inject into low earth orbit (LEO). If we start from rest, this means a delta-v equal to 7.8 km/s. Accounting for frictional losses and gravity, the actual requirement rocket scientists need to design for is just shy of delta-v = 10 km/s. So assuming a lower bound effective exhaust velocity of 3500 m/s, we require a mass ratio of…

    \Delta v = \left|v_e\right| \ln \frac{M_0}{M_f} \Rightarrow \ln \frac{M_0}{M_f} = \frac{10000}{3500} = 2.857
    \therefore \frac{M_0}{M_f} = e^{2.857} = \underline{17.4}

    to reach LEO. This means that the original rocket on the launch pad is 17.4 times heavier than when all the rocket fuel is depleted!

    Just to put this into perspective, this means that the mass of fuel inside the rocket is SIXTEEN times greater than the dry structural mass of tanks, payload, engine, guidance systems etc. That’s a lot of fuel!
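    The same arithmetic takes a few lines of Python:

```python
# Sketch: the worked LEO example above, solved with M0/Mf = exp(dv / v_e).
import math

def mass_ratio(delta_v, v_e):
    """Tsiolkovsky mass ratio M0/Mf needed for a given delta-v."""
    return math.exp(delta_v / v_e)

ratio = mass_ratio(10_000.0, 3_500.0)   # ~17.4 for the LEO example
fuel_per_kg_dry = ratio - 1.0           # ~16.4 kg propellant per kg of dry mass
```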

    Delta-Vs for inner Solar System
    Delta-v figures required for rendezvous in the solar system. Note the delta-v to get to the Moon is approximately 10 + 4.1 + 0.7 + 1.6 = 16.4 km/s and thus requires a whopping mass ratio of 108.4 at an effective exhaust velocity of 3500 m/s.

    The ratio of the rocket’s initial mass to its final mass

    \frac{M_0}{M_f} = e^{\Delta v / v_e}

    is known as the mass ratio. In some cases, the reciprocal of the mass ratio is used to calculate the mass fraction:

    \text{Mass fraction} = 1 - \left(\frac{M_0}{M_f}\right)^{-1}

    The mass fraction is necessarily always smaller than 1, and in the above case is equal to 1 - 17.4^{-1} = 0.943, i.e. 94.3%.

    So 94% of this rocket’s mass is fuel!

    Such figures are by no means out of the ordinary. In fact, the Space Shuttle had a mass ratio in this ballpark (15.4 = 93.5% fuel) and Europe’s Ariane V rocket has a mass ratio of 39.9 (97.5% fuel).
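    A short sketch reproduces these percentages from the quoted mass ratios:

```python
# Sketch: mass fraction from mass ratio, 1 - (M0/Mf)^-1, applied to the
# figures quoted in the text for the worked example, Space Shuttle and Ariane V.
def mass_fraction(mass_ratio):
    """Fraction of lift-off mass that is propellant."""
    return 1.0 - 1.0 / mass_ratio

for name, ratio in [("worked example", 17.4), ("Space Shuttle", 15.4), ("Ariane V", 39.9)]:
    print(f"{name}: {mass_fraction(ratio):.1%} fuel")
```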

    If anything, flying a rocket means being perched precariously on top of a sea of highly explosive chemicals!

    The reason for the incredibly high amount of fuel is the exponential term in the above equation. Because delta-v grows only with the logarithm of the mass ratio, every additional increment of delta-v demands exponentially more fuel. And for every extra piece of equipment, e.g. payload, we stick into the rocket, the propellant mass has to grow in proportion to the full mass ratio just to maintain the same delta-v.

    In reality, the situation is obviously more complex. The point of a rocket is to carry a certain payload into space, and the distance we want to travel is governed by a specific amount of delta-v (see the figure above). For example, getting to the Moon requires a delta-v of approximately 16.4 km/s which implies a whopping mass ratio of 108.4. Therefore, if we wish to increase the payload mass, we need to simultaneously increase propellant mass to keep the mass ratio at 108.4. However, increasing the amount of fuel increases the loads acting on the rocket, and therefore more structural mass is required to safely get the rocket to the Moon. Of course, increasing structural mass similarly increases our fuel requirement, and off we go on a nice feedback loop…

    This simple example explains why the mass ratio is a key indicator of a rocket’s structural efficiency. The higher the mass ratio the greater the ratio of delta-v producing propellant to non-delta-v producing structural mass. All other factors being equal, this suggests that a high mass ratio rocket is more efficient because less structural mass is needed to carry a set amount of propellant.

    The optimal rocket is therefore propelled by a high specific impulse fuel mixture (for high exhaust velocity), with minimal structural requirements to contain the propellant and resist flight loads, and minimal requirements for additional auxiliary components such as guidance systems, attitude control, etc.

    For this reason, early rocket stages typically use high-density propellants. The higher density means the propellants take up less space per unit mass. As a result, the tank structure holding the propellant is more compact as well. For example, the Saturn V rocket used the slightly lower specific impulse combination of kerosene and liquid oxygen for the first stage, and the higher specific impulse propellants liquid hydrogen and liquid oxygen for later stages.

    Closely related to this is the idea of staging. Once a certain amount of fuel within the tanks has been used up, it is beneficial to shed the unnecessary structural mass that was previously used to contain the fuel but is no longer contributing to delta-v. In fact, for high delta-v missions, such as getting into orbit, the total dry mass of the rockets we use today is too great to accelerate to the desired delta-v in one piece. Hence the idea of multi-stage rockets: we connect multiple rockets in stages, incrementally discarding those parts of the structural mass that are no longer needed, thereby increasing the mass ratio and delta-v capacity of the residual pieces of the rocket.
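    A toy comparison, with all masses invented for illustration, shows the benefit of dropping spent structure mid-flight:

```python
# Sketch: staging benefit. Two rockets with identical propellant and total
# structural mass; the two-stage one jettisons empty tankage mid-flight.
# All masses (in tonnes) are illustrative; only ratios matter.
import math

V_E = 3_500.0  # effective exhaust velocity, m/s (illustrative)

def dv(m0, mf):
    """Tsiolkovsky delta-v for a burn from mass m0 down to mf."""
    return V_E * math.log(m0 / mf)

# Single stage: 100 t at lift-off = 10 t structure + 2 t payload + 88 t propellant
dv_single = dv(100.0, 12.0)

# Two stages, same 88 t propellant and 10 t structure:
# stage 1: 8 t structure + 70 t propellant; stage 2: 2 t structure + 18 t propellant
dv_stage1 = dv(100.0, 30.0)   # burn stage-1 propellant with everything attached
dv_stage2 = dv(22.0, 4.0)     # jettison the empty 8 t first stage, burn stage 2
dv_two_stage = dv_stage1 + dv_stage2
# Shedding spent structure raises the effective mass ratio,
# so dv_two_stage exceeds dv_single for the same propellant load.
```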

    Cost

    The cost of getting a rocket on to the launch pad can roughly be split into three components:

    1. Propellant cost.
    2. Cost of dry mass, i.e. rocket casing, engines and auxiliary units.
    3. Operational and labour costs.

    As we saw in the last section, more than 90% of a rocket’s take-off mass is propellant. However, the specific cost (cost per kg) of the propellants is multiple orders of magnitude smaller than the cost per unit mass of the rocket dry mass, i.e. the raw material costs and the operational costs required to manufacture and test the components. A typical propellant combination of kerosene and liquid oxygen costs around $2/kg, whereas the dry mass cost of an unmanned orbital vehicle is at least $10,000/kg. As a result, the propellant cost of flying into low earth orbit is basically negligible.

    The incredibly high dry mass costs are not necessarily because the raw materials, predominantly high-grade aerospace metals, are prohibitively expensive. Rather, first, they cannot be bought at scale because of the limited number of rockets being manufactured. Second, the criticality of reducing structural mass for maximising delta-v means that very tight safety factors are employed. Operating a tight safety-factor design philosophy while ensuring sufficient safety and reliability under the extreme loads exerted on the rocket means that manufacturing standards and quality control measures are by necessity state-of-the-art. Such procedures are often highly specialised technologies that significantly drive up costs.

    To clear these economic hurdles, some have proposed to manufacture simple expendable rockets at scale, while others are focusing on reusable rockets. The former approach will likely only work for smaller unmanned rockets and is being pursued by companies such as Rocket Lab Ltd. The Space Shuttle was an attempt at the latter approach that did not live up to its potential: the servicing costs associated with the reusable heat shield were unexpectedly high and ultimately forced the retirement of the Shuttle. Most recently, Elon Musk and SpaceX have picked up the ball and have successfully designed a fully reusable first stage.


    The principles outlined above set the landscape of the type of rocket we want to design: ideally, high specific impulse propellants suspended in a lightweight yet strong tankage structure above an efficient means of combustion.

    Some of the more detailed questions rocket engineers are faced with are:

    • What propellants to use to do the job most efficiently and at the lowest cost?
    • How to expel and direct the exhaust gases most efficiently?
    • How to control the reaction safely?
    • How to minimise the mass of the structure?
    • How to control the attitude and accuracy of the rocket?

    We will address these questions in the next part of this series.

    References

    [1] Rolls-Royce plc (1996). The Jet Engine. Fifth Edition. Derby, England.

  • The History of Rocket Science

    Rocket technology has evolved for more than 2000 years. Today’s rockets are a product of a long tradition of ingenuity and experimentation, and combine technical expertise from a wide array of engineering disciplines. Very few, if any, of humanity’s inventions are designed to withstand equally extreme conditions. Rockets are subjected to awesome g-forces at lift-off, experience extreme hot spots in places where aerodynamic friction acts most strongly, and extreme cold due to liquid hydrogen/oxygen propellants at cryogenic temperatures. Operating a rocket is a balancing act, and the line between a successful launch and a catastrophic blow-up is often razor thin. No other engineering system rivals the complexity and hierarchy of technologies that need to interface seamlessly to guarantee sustained operation. It is no coincidence that “rocket science” is the quintessential cliché to describe the mind-blowingly complicated.

    Fortunately for us, we live in a time where rocketry is undergoing another golden period. Commercial rocket companies like SpaceX and Blue Origin are breathing fresh air into an industry that has traditionally been dominated by government-funded space programs. But even the incumbent companies are not resting on their laurels, and are developing new powerful rockets for deep-space exploration and missions to Mars. Recent blockbuster movies such as Gravity, Interstellar and The Martian are an indication that space adventures are once again stirring the imagination of the public.

    What better time than now to look back at the past 2000 years of rocketry, investigate where past innovation has taken us and look ahead to what is on the horizon? It’s certainly impossible to cover all of the 51 influential rockets in the chart below, but I will try my best to provide a broad brush stroke of the journey from the early beginnings in China to the Space Race and beyond.

    51 influential rockets ordered by height. Created by Tyler Skrabek

    The history of rocketry can be loosely split into two eras. First, early pre-scientific tinkering and second, the post-Enlightenment scientific approach. The underlying principle of rocket propulsion has largely remained the same, whereas the detailed means of operation and our approach to developing rocketry has changed a great deal.

    The fundamental principle of rocket propulsion, spewing hot gases through a nozzle to induce motion in the opposite direction, is nicely illustrated by two historic examples. The Roman writer Aulus Gellius tells a story of Archytas, who, sometime around 400 BC, built a flying pigeon out of wood. The pigeon was held aloft by a jet of steam or compressed air escaping through a nozzle. Three centuries later, Hero of Alexandria invented the aeolipile based on the same principle of using escaping steam as a propulsive fluid. In the aeolipile, a hollow sphere was connected to a water bath via tubing, which also served as a primitive type of bearing, suspending the sphere in mid-air. A fire beneath the water basin created steam which was subsequently forced to flow into the sphere via the connected tubing. The only way for the gas to escape was through two L-shaped outlets pointing in opposite directions. The escaping steam induced a moment about the hinged support effectively rotating the sphere about its axis.

    In both these examples, the motion of the device is governed by the conservation of momentum. When the rocket and internal gases are moving as one unit, the overall momentum, the product of mass and velocity, is equal to P_1. Thus for a total mass of rocket and gas, m = m_r + m_g, moving at velocity v,

    mv = \left(m_r + m_g\right)v = P_1

    As the gases are expelled through the rear of the rocket, the overall momentum of the rocket and fuel has to remain constant as long as no external forces are acting on the system. Thus, if a very small amount of gas \mathrm{d}m is expelled at velocity v_e relative to the rocket (either in the direction of v or in the opposite direction), the overall momentum of the system is

    \left(m - \mathrm{d}m\right)\left(v + \mathrm{d}v_r\right) + \mathrm{d}m\left(v + v_e\right) = P_2

    As P_2 has to equal P_1 to conserve momentum

    mv = \left(m - \mathrm{d}m\right)\left(v + \mathrm{d}v_r\right) + \mathrm{d}m\left(v + v_e\right)

    and by isolating the change in rocket velocity \mathrm{d}v_r

    \left(m - \mathrm{d}m\right)\mathrm{d}v_r = -v_e\,\mathrm{d}m
    \therefore \mathrm{d}v_r = -\frac{\mathrm{d}m}{m - \mathrm{d}m}\,v_e

    The negative sign in the equation above indicates that the rocket always changes velocity in the opposite direction of the expelled gas. Hence, if the gas is expelled in the opposite direction of the motion v (i.e. v_e is negative), then the change in the rocket velocity will be positive (i.e. it will accelerate).

    At any time t the quantity M = m - \mathrm{d}m is equal to the residual mass of the rocket (dry mass + propellant) and \mathrm{d}m = \mathrm{d}M denotes its change. If we assume that the exhaust velocity of the gas remains constant throughout, we can easily integrate the above expression to find the change in velocity as the total rocket mass (dry mass + propellant) changes from an initial mass M_o to a final mass M_f. Hence,

    \Delta v = -\int_{M_o}^{M_f} v_e \frac{\mathrm{d}M}{M} = -v_e \ln M \Big|_{M_o}^{M_f} = v_e \left(\ln M_o - \ln M_f\right) = v_e \ln \frac{M_o}{M_f}

    This equation is known as the Tsiolkovsky rocket equation (more on him later) and is applicable to any body that accelerates by expelling part of its mass at a specific velocity.

    Often, we are more interested in the thrust created by the rocket and its associated acceleration a_r. Hence, by dividing the equation for \mathrm{d}v_r by a small time increment \mathrm{d}t

    a_r = \frac{\mathrm{d}v_r}{\mathrm{d}t} = -\frac{\mathrm{d}M}{\mathrm{d}t} \frac{v_e}{M} = \frac{\dot{M}}{M} v_e

    and the associated thrust F_r acting on the rocket is

    F_r = M a_r = \dot{M} v_e

    where \dot{M} is the mass flow rate of gas exiting the rocket. This simple equation captures the fundamental physics of rocket propulsion. A rocket creates thrust either by expelling more of its mass at a higher rate (\dot{M}) or by increasing the velocity at which the mass is expelled. In the ideal case that’s it! (By idealised we mean constant v_e and no external forces, e.g. aerodynamic drag in the atmosphere or gravity. In actual calculations of the required propellant mass these forces and other efficiency-reducing factors have to be included.)

    Graph of the Tsiolkovsky rocket equation

    A plot of the rocket equation highlights one of the most pernicious conundrums of rocketry: the amount of fuel required (i.e. the mass ratio M_o/M_f) to accelerate the rocket through a velocity change \Delta v at a fixed effective exhaust velocity v_e increases exponentially as we increase the demand for greater \Delta v. As the cost of a rocket is closely related to its mass, this explains why it is so expensive to propel anything of meaningful size into orbit (\Delta v \approx 28,800 km/hr (18,000 mph) for low-earth orbit).
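    The exponential relationship in the plot can be tabulated in a couple of lines, with v_e fixed at an illustrative 3500 m/s:

```python
# Sketch: exponential growth of the required mass ratio, M0/Mf = exp(dv / v_e),
# tabulated for a fixed (illustrative) effective exhaust velocity.
import math

V_E = 3_500.0  # m/s

ratios = {dv: math.exp(dv / V_E) for dv in (2_000, 4_000, 8_000, 16_000)}
# Doubling the demanded delta-v squares the required mass ratio,
# since exp(2x / v_e) == exp(x / v_e) ** 2.
```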

    The early beginnings


    Drawing of a Chinese rocket and launching mechanism

    The wood pigeon and aeolipile do not resemble anything that we would recognise as a rocket. In fact, the exact date when rockets first appeared is still unresolved. Records show that the Chinese developed gunpowder, a mixture of saltpetre, sulphur and charcoal dust, at around 100 AD. Gunpowder was used to create colourful sparks, smoke and explosive devices out of hollow bamboo sticks, closed off at one end, for religious festivals. Perhaps some of these bamboo tubes shot off or skittered along the ground; in any case, the Chinese started tinkering with the gunpowder-filled bamboo sticks and attached them to arrows. Initially the arrows were launched in the traditional way using bows, creating a form of early incendiary bomb, but later the Chinese realised that the bamboo sticks could launch themselves just by the thrust produced by the escaping hot gases.

    The first documented use of such a “true” rocket was during the battle of Kai-Keng between the Chinese and Mongols in 1232. During this battle the Chinese managed to hold the Mongols at bay using a primitive form of solid-fuelled rocket. A hollow tube was capped at one end, filled with gunpowder and then attached to a long stick. The ignition of the gunpowder increased the pressure inside the hollow tube and forced some of the hot gas and smoke out through the open end. As governed by the law of conservation of momentum, this created thrust that propelled the rocket in the direction of the capped end of the tube, with the long stick acting as a primitive guidance system, very much reminiscent of the firework “rockets” we use today.

    Wan Hu (the man in the moon?) and his rocket chair

    According to a Chinese legend, Wan Hu, a local official during the 16th century Ming dynasty, constructed a chair with 47 gunpowder bamboo rockets attached, and in some versions of the legend supposedly fitted kite wings as well. The rocket chair was launched by igniting all 47 bamboo rockets simultaneously, and apparently, after the commotion was over, Wan Hu was gone. Some say he made it into space, and is now the “Man in the Moon”. Most likely, Wan Hu suffered the first ever launch pad failure.

    One theory is that rockets were brought to Europe via the 13th century Mongol conquests. In England, Roger Bacon developed a more powerful gunpowder (75% saltpetre, 15% carbon and 10% sulphur) that increased the range of rockets, while Jean Froissart found that aiming accuracy improved when rockets were launched through tubes, an early forerunner of the launch tube. By the Renaissance, the use of rockets for weaponry fell out of fashion and experimentation with fireworks increased instead. In the late 16th century, a German tinkerer, Johann Schmidlap, experimented with staged rockets, an idea that is the basis for all modern rockets. Schmidlap fitted a smaller second-stage rocket on top of a larger first-stage rocket; once the first stage burned out, the second stage continued to propel the rocket to higher altitudes. At about the same time, Kazimierz Siemienowicz, a Polish-Lithuanian commander in the Polish Army, published a manuscript that included designs for multi-stage rockets and delta-wing stabilisers intended to replace the long guiding rods then in use.

    The scientific method meets rocketry

    The scientific groundwork of rocketry was laid during the Enlightenment by none other than Sir Isaac Newton. His three laws of motion,

    1) In a particular reference frame, a body will stay in a state of constant velocity (moving or at rest) unless a net force acts on the body
    2) The net force acting on a body causes an acceleration that is proportional to the body’s inertia (mass), i.e. F = ma
    3) A force exerted by one body on another induces an equal and opposite reaction force on the first body

    are known to every student of basic physics. These three laws were probably understood intuitively by early rocket designers, but formalising the principles meant they could be applied consciously as design guidelines. The first law explains why rockets move at all: without propulsive thrust the rocket remains stationary. The second quantifies the amount of thrust produced by a rocket at a specific instant in time, i.e. for a specific mass M. (Note that Newton’s second law is only valid for constant-mass systems and is therefore not equivalent to the conservation of momentum approach described above. When mass varies, an equation that explicitly accounts for the changing mass must be used.) The third law explains why the expulsion of mass produces, in reaction, a thrust force on the rocket.
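    To make this concrete, a momentum balance on the expelled gases gives the thrust as the mass flow rate times the exhaust velocity, and Newton’s second law applied to the rocket’s current mass gives the instantaneous acceleration. A short Python sketch with assumed, loosely V-2-like numbers (purely illustrative):

```python
def thrust(mass_flow_rate, exhaust_velocity):
    """Momentum thrust: mass expelled per second times its exit velocity (N)."""
    return mass_flow_rate * exhaust_velocity

def net_acceleration(thrust_force, current_mass, g=9.81):
    """Net upward acceleration of a vertically ascending rocket (m/s^2)."""
    return thrust_force / current_mass - g

# Assumed illustrative numbers, loosely V-2-like:
F = thrust(150.0, 2000.0)                  # 150 kg/s at 2,000 m/s -> 300 kN
a_liftoff = net_acceleration(F, 12500.0)   # fully fuelled rocket
a_burnout = net_acceleration(F, 4000.0)    # nearly empty tanks
print(F, a_liftoff, a_burnout)
```

    Because the thrust stays roughly constant while the mass shrinks as propellant is burnt, the acceleration grows steadily during the burn, which is exactly why the instantaneous, variable-mass form of the second law matters.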

    In the 1720s, around the time of Newton’s death, researchers in the Netherlands, Germany and Russia started to use Newton’s laws as tools in the design of rockets. The Dutch professor Willem ’s Gravesande built rocket-propelled cars by forcing steam through a nozzle. In Germany and Russia rocket designers started to experiment with larger rockets, powerful enough that the hot exhaust flames burnt deep holes into the ground before launching. The British colonial wars of 1792 and 1799 saw the use of Indian rocket fire against the British army: Hyder Ali and his son Tipu Sultan, the rulers of the Kingdom of Mysore in India, developed the first iron-cased rockets in 1792 and used them against the British in the Anglo-Mysore Wars.

    Casing the propellant in iron, which extended range and thrust, was more advanced technology than anything the British had seen until then. Inspired by it, the British Colonel William Congreve began to design his own rockets for the British forces. Congreve developed a new propellant mixture and fitted an iron tube with a conical nose to improve aerodynamics. Congreve’s rockets had an operational range of up to 5 km and were used successfully by the British in the Napoleonic Wars, and launched from ships to attack Fort McHenry in the War of 1812. Congreve created both carbine-ball-filled rockets to be used against land targets and incendiary rockets to be used against ships. However, even Congreve’s rockets could not significantly improve on the main shortcoming of rockets: accuracy.

    A selection of Congreve rockets (Wikimedia Commons).

    At the time, the effectiveness of rockets as a weapon lay not in their accuracy or explosive power, but in the sheer number that could be fired simultaneously at the enemy. The Congreve rockets managed some form of basic attitude control by attaching a long stick to the explosive, but they had a tendency to veer sharply off course. In 1844, the British designer William Hale developed spin stabilisation, now commonly achieved by rifling in gun barrels, which removed the need for the rocket stick. Hale forced the escaping exhaust gases at the rear of the rocket to impinge on small vanes, causing the rocket to spin and stabilise (the same reason a gyroscope remains upright when spun on a table top). The use of rockets in war soon took a back seat once again when the Prussian army developed the breech-loading cannon with exploding warheads, which proved far superior to the best rockets.

    The era of modern rocketry

    Soon, new applications for rockets were being imagined. Jules Verne, always the visionary, put the dream of space flight into words in his science-fiction novel “De la Terre à la Lune” (From the Earth to the Moon), in which a projectile named Columbiad, carrying three passengers, is shot at the moon using a giant cannon. The Russian schoolteacher Konstantin Tsiolkovsky (of rocket equation fame) proposed the idea of using rockets as a vehicle for space exploration, but acknowledged that the main bottleneck to achieving such a feat was the limited range of rockets. Tsiolkovsky understood that the speed and range of rockets were limited by the exhaust velocity of the propellant gases. In a 1903 report, “Research into Interplanetary Space by Means of Rocket Power”, he suggested the use of liquid propellants and formalised the rocket equation derived above, relating the rocket engine exhaust velocity to the change in velocity of the rocket itself (now known as the Tsiolkovsky rocket equation in his honour, although it had been derived before).

    Tsiolkovsky also advocated the development of orbital space stations, solar energy and the colonisation of the Solar System. One of his quotes is particularly prescient considering Elon Musk’s plans to colonise Mars:

    “The Earth is the cradle of humanity, but one cannot live in the cradle forever” — In a letter written by Tsiolkovsky in 1911.

    The American scientist Robert H. Goddard, now known as the father of modern rocketry, was equally interested in extending the range of rockets, especially reaching higher altitudes than the gas balloons used at the time. In 1919 he published a short manuscript entitled “A Method of Reaching Extreme Altitudes” that summarised his mathematical analysis and practical experiments in designing high altitude rockets. Goddard proposed three ways of improving current solid-fuel technology. First, combustion should be confined to a small chamber so that the fuel container would be subjected to much lower pressures. Second, Goddard advocated the use of multi-stage rockets to extend their range, and third, he suggested the use of a supersonic de Laval nozzle to improve the exhaust speed of the hot gases.

    Goddard started to experiment with solid-fuel rockets, trying various compounds and measuring the velocity of the exhaust gases. As a result of this work, Goddard became convinced of Tsiolkovsky’s early prediction that a liquid propellant would work better. The problem Goddard faced was that liquid-propellant rockets were an entirely new field of research: no one had ever built one, and the system required was much more complex than for a solid-fuelled rocket. Such a rocket would need separate tanks and pumps for the fuel and oxidiser, a combustion chamber to combine and ignite the two, and a turbine to drive the pumps (much like the turbine in a jet engine drives the compressor at the front). Goddard also added a de Laval nozzle, which expanded the hot exhaust gases into a supersonic, highly directed jet, more than doubling the thrust and increasing engine efficiency from 2% to 64%! Despite these technical challenges, Goddard designed the first successful liquid-fuelled rocket, propelled by gasoline as fuel and liquid oxygen as oxidiser, and tested it on March 16, 1926. The rocket burned for 2.5 seconds and reached an altitude of 12.5 metres. Just like the Wright brothers’ first 40-yard flight in 1903, this feat seems unimpressive by today’s standards, but Goddard’s achievements put rocketry on an exponential growth curve that led to radical improvements over the next 40 years. Goddard himself continued to innovate: his rockets flew to higher and higher altitudes, he added a gyroscope system for flight control and introduced parachute recovery systems.

    On the other side of the Atlantic, German scientists were beginning to play a major role in the development of rockets. Inspired by Hermann Oberth’s ideas on rocket travel, the mathematics of spaceflight and the practical design of rockets published in his book “Die Rakete zu den Planetenräumen” (The Rocket into Planetary Space), a number of rocket societies and research institutes were founded in Germany. The German bicycle and car manufacturer Opel (now part of GM) began developing rocket-powered cars, and in 1928 Fritz von Opel drove the Opel-RAK.1 on a racetrack. In 1929 this design was extended to the Opel-Sander RAK 1-airplane, which crashed during its first flight in Frankfurt. In the Soviet Union, the Gas Dynamics Laboratory in Leningrad under the directorship of Valentin Glushko built more than 100 different engine designs, experimenting with different fuel injection techniques.

    A cross-section of the V-2 rocket (Wikimedia Commons).

    Under the directorship of Wernher von Braun and Walter Dornberger, the Verein für Raumschiffahrt (Society for Space Travel) played a pivotal role in the development of the Vergeltungswaffe 2, also known as the V-2 rocket, the most advanced rocket of its time. The V-2 burned a mixture of alcohol as fuel and liquid oxygen as oxidiser, and achieved its great thrust by considerably increasing the mass flow rate of propellant to about 150 kg (330 lb) per second. The V-2 featured much of the technology we see on rockets today, such as turbopumps and guidance systems, and thanks to its range of around 300 km (190 miles), the V-2 could be launched from the shores of the Baltic to bomb London during WWII. The 1,000 kg (2,200 lb) explosive warhead fitted in the tip of the V-2 was capable of devastating entire city blocks, but still lacked the accuracy to reliably hit specific targets. Towards the end of WWII, German scientists were already planning much larger rockets, today known as Intercontinental Ballistic Missiles (ICBMs), that could be used to attack the United States, and were strapping rockets to aircraft either to power them or for vertical take-off.

    With the fall of the Third Reich in April 1945 much of this technology fell into the hands of the Allies. The Allies’ own rocket programmes were much less sophisticated, so a race ensued to capture as much of the German technology as possible. The Americans alone captured 300 train loads of V-2 rocket parts and shipped them back to the United States. Furthermore, the most prominent of the German rocket scientists emigrated to the United States, partly due to the much better opportunities to develop rocketry there, and partly to escape the repercussions of having played a role in the Nazi war machine. The V-2 essentially evolved into the American Redstone rocket, which was used during the Mercury project.

    The Space Race – to the moon and beyond

    After WWII both the United States and the Soviet Union began heavily funding research into ICBMs, partly because these had the potential to carry nuclear warheads over long distances, and partly due to the allure of being the first to travel to space. In 1948, the US Army combined a captured V-2 rocket with a WAC Corporal rocket to build the largest two-stage rocket to be launched in the United States. This two-stage rocket was known as the “Bumper-WAC”, and over the course of six flights reached a peak altitude of 400 kilometres (250 miles), pretty much exactly the altitude at which the International Space Station (ISS) orbits today.

    The Vostok rocket, based on the R-7 ICBM

    Despite these developments, the Soviets were the first to put a man-made object, i.e. an artificial satellite, into orbit. Under the leadership of chief designer Sergei Korolev, the V-2 was copied and then improved upon in the R-1, R-2 and R-5 missiles. At the turn of the 1950s the German designs were abandoned in favour of the inventions of Aleksei Mikhailovich Isaev, which formed the basis for the first Soviet ICBM, the R-7. The R-7 was further developed into the Vostok rocket, which launched the first satellite, Sputnik I, into orbit on October 4, 1957, a mere 12 years after the end of WWII. The launch of Sputnik I was the first major news story of the space race. Only a couple of weeks later the Soviets successfully launched Sputnik II into orbit with the dog Laika on board.

    One problem that the Soviets had not solved was atmospheric re-entry. Any object orbiting a planet must travel fast enough that the gravitational pull towards the planet is offset by its tangential motion along the curvature of the planet’s surface. During re-entry, however, this enormous speed causes the orbiting body to literally smash into the atmosphere, creating incredible amounts of heat. In 1951, H.J. Allen and A.J. Eggers discovered that a high-drag, blunted shape, not a low-drag teardrop, counter-intuitively minimises the re-entry heating by redirecting around 99% of the energy into the surrounding atmosphere. Allen and Eggers’ findings were published in 1958 and were used in the Mercury, Gemini, Apollo and Soyuz manned space capsules. This design was later improved upon in the Space Shuttle, whereby a shock wave was induced ahead of the heat shield via an extremely high angle of attack, deflecting most of the heat away from the vehicle.
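    The orbital speed in question follows from balancing gravity against the centripetal acceleration of circular motion, GM/r² = v²/r, i.e. v = √(GM/r). A quick Python check (standard values for Earth’s gravitational parameter and radius; 400 km is roughly the ISS altitude):

```python
import math

MU_EARTH = 3.986e14   # Earth's standard gravitational parameter GM, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def circular_orbit_velocity(altitude_m):
    """Speed at which gravity GM/r^2 exactly provides the centripetal v^2/r."""
    r = R_EARTH + altitude_m
    return math.sqrt(MU_EARTH / r)

v_iss = circular_orbit_velocity(400e3)  # ~7.7 km/s at ISS altitude
print(f"Orbital velocity at 400 km: {v_iss:.0f} m/s")
```

    At roughly 7.7 km/s (about 27,600 km/h), all of this kinetic energy has to be shed during re-entry, mostly as heat, hence the value of Allen and Eggers’ blunt-body insight.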

    The United States’ first satellite, Explorer I, did not follow until January 31, 1958. Explorer I weighed about one-thirtieth as much as the Sputnik II satellite, but the Geiger radiation counters on board were used to make the first scientific discovery in outer space: the Van Allen radiation belts. Explorer I had originally been developed by the US Army, and in October 1958 the National Aeronautics and Space Administration (NASA) was officially formed from its predecessor, the National Advisory Committee for Aeronautics (NACA), to oversee the space programme. Meanwhile, the Soviets developed the Vostok, Soyuz and Proton families of rockets from the original R-7 ICBM for the human spaceflight programme. In fact, the Soyuz rocket is still in use today, is the most frequently flown and most reliable rocket system in history, and after the Space Shuttle’s retirement in 2011 became the only viable means of transporting crews to the ISS. Similarly, the Proton rocket, also developed in the 1960s, is still being used to haul heavier cargo into low-Earth orbit.

    The Soyuz rocket in transport to the launch site

    Shortly after these initial satellite launches, NASA developed the experimental X-15 air-launched rocket-propelled aircraft, which, in 199 flights between 1959 and 1968, broke numerous flight records, including records for speed (7,274 km/h or 4,520 mph) and altitude (108 km or 67 miles). The X-15 also provided NASA with data on the optimal re-entry angles from space into the atmosphere.

    The next milestone in the space race once again belonged to the Soviets. On April 12, 1961, the cosmonaut Yuri Gagarin became the first human to travel into space. Over a period of just under two hours, Gagarin orbited the Earth inside a Vostok 1 space capsule at around 300 km (190 miles) altitude, and after re-entry into the atmosphere ejected at an altitude of 6 km (20,000 feet) and parachuted to the ground. Overnight, Gagarin became the most famous Soviet on the planet, travelling around the world as a beacon of Soviet success and superiority over the West.

    Shortly after Gagarin’s successful flight, the American astronaut Alan Shepard reached a suborbital altitude of 187 km (116 miles) in the Freedom 7 Mercury capsule. The Redstone rocket used to launch Shepard from Cape Canaveral did not quite have the power to send the Mercury capsule into orbit, and had suffered a series of embarrassing failures prior to the launch, increasing the pressure on US rocket engineers. Nevertheless, days after Shepard’s flight, President John F. Kennedy delivered the now-famous words before a joint session of Congress:

    “This nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth.”

    Despite the bold nature of this challenge, NASA’s Mercury project was already well underway, developing the technology for human spaceflight. In February 1962, the more powerful Atlas missile propelled John Glenn into orbit, thereby restoring some form of parity between the USA and the Soviet Union. The last of the Mercury flights was flown in 1963, with Gordon Cooper orbiting the Earth for nearly 1.5 days. The family of Atlas rockets remains one of the most successful to this day. Apart from launching a number of astronauts into space during the Mercury project, the Atlas has been used to carry commercial, scientific and military satellites into orbit.

    Following the Mercury missions, the Gemini project made significant strides towards a successful Moon flight. The Gemini capsule was propelled by an even more powerful ICBM, the Titan, and allowed astronauts to remain in space for up to two weeks, during which they gained the first experience with space-walking, and with rendezvous and docking procedures using the Gemini spacecraft. An incredible ten Gemini missions were flown throughout 1965-66. The high success rate of the missions was testament to the improving reliability of NASA’s rockets and spacecraft, and allowed NASA engineers to collect invaluable data for the coming Apollo Moon missions. The Titan missile itself remains one of the most successful and long-lived rockets (1959-2005), having carried the Viking spacecraft to Mars, the Voyager probes to the outer solar system, and multiple heavy satellites into orbit. At about the same time, around the early 1960s, an entire family of versatile rockets, the Delta family, was being developed. The Delta family became the workhorse of the US space programme, achieving more than 300 launches with a reliability greater than 95%! The versatility of the Delta family was based on its tailorable lifting capability, using different interchangeable stages and external boosters that could be added for heavier lifting.

    At this point, the tide had mostly turned. The United States had been off to a slow start but had used the data from their early failures to improve the design and reliability of their rockets. The Soviets, while being more successful initially, could not achieve the same rate of launch success and this significantly hampered their efforts during the upcoming race to the moon.

    The Delta 4 rocket family (Photo Credit: United Launch Alliance)

    To get to the moon, a much more powerful rocket than the Titan or Delta would be needed. This now famous rocket, the 110.6 m (363 feet) tall Saturn V, consisted of three separate main rocket stages; the Apollo capsule with a small fourth propulsion stage for the return trip; and a two-stage lunar lander, with one stage for descending onto the Moon’s surface and the other for lifting back off it. The Saturn V was largely the brainchild and crowning achievement of Wernher von Braun, the original lead developer of the V-2 rocket in WWII Germany, with a capability of launching 140,000 kg (310,000 lb) into low-Earth orbit and 48,600 kg (107,100 lb) to the Moon. This launch capability dwarfed all previous rockets, and to this day the Saturn V remains the tallest, heaviest and most powerful rocket ever flown operationally (last on the chart at the start of the piece). NASA’s efforts reached their glorious climax with the Apollo 11 mission on July 20, 1969, when astronaut Neil Armstrong became the first man to set foot on the Moon, a mere 11.5 years after the first successful launch of the Explorer I satellite. The Apollo 11 mission became the first of six successful Moon landings between 1969 and 1972. A smaller version of the moon rocket, the Saturn IB, was also developed and used for some of the early Apollo test missions and later to transport three crews to the US space station Skylab.

    The Space Shuttle

    The Space Shuttle “Discovery”

    NASA’s final major innovation was the Space Shuttle. The idea behind the Space Shuttle was to design a reusable rocket system for carrying crew and payload into low-Earth orbit. The rationale was that manufacturing the rocket hardware is a major contributor to the overall launch costs, and that destroying the stages after each launch is not cost effective. Imagine having to throw away your Boeing 747 or Airbus A380 every time you flew from London to New York; ticket prices would not be where they are now. The Shuttle consisted of a winged, airplane-like spacecraft that was boosted into orbit by liquid-propellant engines on the Shuttle itself, fuelled from a massive orange external tank, and by two solid rocket boosters attached to either side. After launch, the solid rocket boosters and external fuel tank were jettisoned, and the boosters recovered for future use. At the end of a Shuttle mission, the orbiter re-entered Earth’s atmosphere and then followed a tortuous zig-zag course, gliding unpowered to land on a runway like any other aircraft. NASA promised that the Shuttle would reduce launch costs by 90%. However, landings of the solid rocket boosters in the sea often damaged them beyond repair, and the effort required to service the orbiter heat shield, inspecting each of the 24,300 unique tiles separately, ultimately drove the cost of putting a kilogram of payload into orbit above that of the Saturn V rocket that preceded it. The five Shuttles, Columbia, Challenger, Discovery, Atlantis and Endeavour, completed 135 missions between 1981 and 2011, with the tragic loss of the Challenger in 1986 and the Columbia in 2003. While the Shuttle facilitated the construction of the International Space Station and the installation of the Hubble space telescope in orbit, the ultimate goal of economically sustainable space travel was never achieved.

    However, this goal is now on the agenda of commercial space companies such as SpaceX, Reaction Engines, Blue Origin, Rocket Lab and the Sierra Nevada Corporation.

    New approaches

    After the demise of the Space Shuttle programme in 2011, the United States’ capability of launching humans into space was heavily restricted. NASA is currently working on a new Space Launch System (SLS), the aim of which is to extend NASA’s reach beyond low-Earth orbit and further out into the Solar System. Although the SLS is being designed and assembled by NASA, partners such as Boeing, United Launch Alliance, Orbital ATK and Aerojet Rocketdyne are co-developing individual components. The SLS specification as it stands would make it the most powerful rocket in history, and the SLS is therefore being developed in two stages (reminiscent of the Saturn IB and Saturn V rockets). First, a rocket with a payload capability of 70 metric tons (154,000 lb) is being developed from components of previous rockets. The goal of this heritage SLS is to conduct two lunar flybys with the Orion spacecraft, one unmanned and the other with a crew. Second, a more advanced version of the SLS with a payload capability of 130 metric tons (290,000 lb) to low-Earth orbit, about the same payload capacity as and 20% more thrust than the Saturn V rocket, is intended to carry scientific equipment, cargo and the manned Orion capsule into deep space. The first flight of an unmanned Orion capsule on a trip around the moon is planned for 2018, while manned missions are expected by 2021-2023. By 2026 NASA plans to send a manned Orion capsule to an asteroid previously placed into lunar orbit by a robotic “capture-and-place” mission.

    NASA’s upgrade plan for the SLS

    However, with the commercialisation of space travel, new entrants are now working on even more daunting goals. The SpaceX Falcon 9 rocket has proven to be a very reliable launch system (with a current success rate of 20 out of 22 launches). Furthermore, SpaceX was the first private company to successfully launch and recover an orbital spacecraft, the Dragon capsule, which regularly ferries supplies and new scientific equipment to the ISS. Currently, the US relies on the Russian Soyuz rocket to bring astronauts to the ISS, but in the near future manned missions are planned with the Dragon capsule. The Falcon 9 is a two-stage-to-orbit launch vehicle whose first stage comprises nine SpaceX Merlin rocket engines fuelled by liquid oxygen and kerosene, with a payload capacity of 13 metric tons (29,000 lb) into low-Earth orbit. There have been three versions of the Falcon 9: v1.0 (retired), v1.1 (retired) and most recently the partially reusable full-thrust version, which on December 22, 2015 used propulsive recovery to land its first stage safely at Cape Canaveral. Efforts are now being made to extend the landing capabilities from land to sea barges. Furthermore, the Falcon Heavy with 27 Merlin engines (a central Falcon 9 rocket with two Falcon 9 first stages strapped to the sides) is expected to extend SpaceX’s lifting capacity to 53 metric tons into low-Earth orbit, making it the second most powerful rocket in use after NASA’s SLS. First flights of the Falcon Heavy are expected for late this year (2016). Of course, the ultimate goal of SpaceX’s CEO Elon Musk is to make humans a multi-planetary species, and to achieve this he is planning to send a colony of a million humans to Mars via the Mars Colonial Transporter, a space launch system of reusable rocket engines, launch vehicles and space capsules. SpaceX’s Falcon 9 already has the lowest launch costs at $60 million per launch, but reliable re-usability should bring these costs down over the next decade such that a flight ticket to Mars could become enticing for at least a million of the richest people on Earth (or perhaps we could sell spots on “Mars – A Reality TV Show”).

    When will this become reality?

    Blue Origin, the rocket company of Amazon founder Jeff Bezos, is taking a similar approach of vertical takeoff and landing to achieve re-usability and lower launch costs. The company is on an incremental trajectory to extend its capabilities from suborbital to orbital flight, led by its motto “Gradatim Ferociter” (Latin for “step by step, ferociously”). Blue Origin’s New Shepard rocket underwent its first test flight in April 2015. In November 2015 the rocket landed successfully after a suborbital flight to 100 km (330,000 ft) altitude, and this was extended to 101 km (333,000 ft) in January 2016. Blue Origin hopes to extend its capabilities to human spaceflight by 2018.

    Reaction Engines is a British aerospace company conducting research into space propulsion systems, focused on the Skylon reusable single-stage-to-orbit spaceplane. The Skylon would be powered by the SABRE engine, a rocket-based combined cycle, i.e. a combination of an air-breathing jet engine and a rocket engine in which both share the same flow path, reusable for about 200 flights. Reaction Engines believes that with this system the cost of carrying one kg (2.2 lb) of payload into low-Earth orbit can be reduced from around $1,500 today (early 2016) to around $900. The hydrogen-fuelled Skylon is designed to take off from a purpose-built runway and accelerate to Mach 5 at 28.5 km (85,500 feet) altitude using the atmosphere’s oxygen as oxidiser. This air-breathing part of the SABRE engine works on the same principles as a jet engine: a turbo-compressor raises the pressure of the incoming atmospheric air, preceded by a pre-cooler that cools the hot air impinging on the engine at hypersonic speeds. The compressed air is fed into a rocket combustion chamber where it is ignited with liquid hydrogen. As in a standard jet engine, a high pressure ratio is crucial to pack as much oxidiser as possible into the combustion chamber and increase the thrust of the engine. As the natural source of oxygen runs out at high altitude, the engine switches to the internally stored liquid oxygen supplies, transforming it into a closed-cycle rocket and propelling the Skylon spacecraft into orbit. The theoretical advantages of the SABRE engine are its high fuel efficiency and low mass, which facilitate the single-stage-to-orbit approach. Reminiscent of the Shuttle, after deploying its payload of up to 15 tons (33,000 lb), the Skylon spacecraft would re-enter the atmosphere protected by a heat shield and land on a runway. The first ground tests of the SABRE engine are planned for 2019 and the first unmanned test flights are expected for 2025.

    The SABRE rocket engine

    Sierra Nevada Corporation is working alongside NASA to develop the Dream Chaser spacecraft for transporting cargo and up to seven people to low-Earth orbit. The Dream Chaser is designed to launch on top of an Atlas V rocket (in place of the nose cone) and land conventionally by gliding onto a runway. The Dream Chaser looks a lot like a smaller version of the Space Shuttle, so intuitively one would expect the same cost inefficiencies as for the Shuttle. However, the engineers at Sierra Nevada say that two changes have been made to the Dream Chaser that should reduce maintenance costs. First, the thrusters used for attitude control are ethanol-based, and therefore non-toxic and a lot less volatile than the hydrazine-based thrusters used by the Shuttle. This should allow maintenance of the Dream Chaser to begin immediately after landing and reduce the time between flights. Second, the thermal protection system is based on an ablative tile that can survive multiple flights and can be replaced in larger groups rather than tile-by-tile. The Dream Chaser is planned to undergo orbital test flights in November 2016.

    The Dream Chaser

    Finally, the New Zealand-based firm Rocket Lab is developing the all-carbon-composite, liquid-fuelled Electron rocket with a payload capability of 110 kg (240 lb) to low-Earth orbit. Rocket Lab is thus focusing on high-frequency rocket launches to transport low-mass payloads, e.g. nano-satellites, into orbit. The goal of Rocket Lab is to make access to space frequent and affordable, so that the rapidly evolving small-scale satellites that provide us with scientific measurements and high-speed internet can be launched reliably and quickly. The Rocket Lab system is designed to cost $5 million per launch at 100 launches a year and use less fuel than a flight on a Boeing 737 from San Francisco to Los Angeles. A special challenge that Rocket Lab faces is the development of all-carbon-composite liquid oxygen tanks to provide the required mass efficiency. To date, the containment of cryogenic (super-cold) liquid fuels, such as liquid hydrogen and liquid oxygen, remains the domain of metallic alloys, as concerns persist about potential leaks due to micro-cracks developing in the resin of a composite at cryogenic temperatures. In composites, there is a mismatch between the thermal expansion coefficients of the reinforcing fibre and the resin, which induces thermal stresses as the composite is cooled to cryogenic temperatures from its high-temperature, high-pressure curing process. The temperature and pressure cycles during the liquid oxygen/hydrogen fill-and-drain procedures then induce extra fatigue loading that can lead to cracks permeating the structure, through which hydrogen or oxygen molecules can easily pass. Such leaks pose a real risk of explosion.
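    As a rough first-order illustration of why this thermal mismatch is so troublesome, the residual stress in the resin scales as σ ≈ E·Δα·ΔT. The numbers below are assumed, broadly representative values for a carbon/epoxy laminate, not Rocket Lab data:

```python
def thermal_stress(modulus_pa, cte_mismatch_per_k, delta_t_k):
    """First-order residual thermal stress: sigma = E * delta_alpha * delta_T (Pa)."""
    return modulus_pa * cte_mismatch_per_k * delta_t_k

# Assumed representative values (illustrative only):
E_RESIN = 3.5e9     # epoxy resin modulus, Pa
DELTA_CTE = 50e-6   # CTE mismatch between resin and carbon fibre, 1/K
DELTA_T = 360.0     # from a ~180 C cure down to ~-180 C liquid oxygen

sigma = thermal_stress(E_RESIN, DELTA_CTE, DELTA_T)
print(f"Residual thermal stress: {sigma / 1e6:.0f} MPa")
```

    At around 63 MPa this estimate is of the same order as the strength of typical epoxy resins, which is why micro-cracking under repeated cryogenic thermal cycling is such a persistent concern.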

    Where do we go from here?

As we have seen, over the last 2000 years rockets have evolved from simple toys and military weapons to complex machines capable of transporting humans into space. To date, rockets are the only viable gateway to places beyond Earth. Furthermore, we have seen that the development of rockets has not always followed a uni-directional path towards improvement. Our capability to send heavier and heavier payloads into space peaked with the development of the Saturn V rocket. This great technological leap was fuelled, to a large extent, by the competitive spirit of the Soviet Union and the United States. Unprecedented funds were available to rocket scientists on both sides during the 1950s-1970s. Furthermore, dreamers and visionaries such as Jules Verne, Konstantin Tsiolkovsky and Gene Roddenberry sparked the imagination of the public and garnered support for the space programs. After the 2003 Columbia disaster, public support for spending taxpayer money on often over-budget programs understandably waned. However, the successes of today's private spaceflight companies, their fierce competition and their visionary goals of colonising Mars are once again inspiring a younger generation. This is, once again, an exciting time for rocketry.

  • Control and Stability of Aircraft

One of the key factors in the Wright brothers’ achievement of building the first powered heavier-than-air aircraft was their insight that a functional airplane would require mastery of three disciplines:

    1. Lift
    2. Propulsion
    3. Control

Whereas the first two had been studied with some success by earlier pioneers such as Sir George Cayley, Otto Lilienthal, Octave Chanute, Samuel Langley and others, the question of control seemed to have fallen by the wayside in the early days of aviation. Even though the Wright brothers built their own little wind tunnel to experiment with different airfoil shapes (mastering lift) and also built their own lightweight engine (improving propulsion) for the Wright Flyer, the bigger innovation was the control system they installed on the aircraft.

    1902 Wright glider turns
    The Wright Flyer: Wilbur makes a turn using wing-warping and the movable rudder, October 24, 1902. By Attributed to Wilbur Wright (1867–1912) and/or Orville Wright (1871–1948). [Public domain], via Wikimedia Commons.
    Fundamentally, an aircraft manoeuvres about its centre of gravity and there are three unique axes about which the aircraft can rotate:

    1. The longitudinal axis from nose to tail, also called the axis of roll, i.e. rolling one wing up and one wing down.
    2. The lateral axis from wing tip to wing tip, also called the axis of pitch, i.e. nose up or nose down.
3. The normal axis from the top of the cabin to the bottom of the landing gear, also called the axis of yaw, i.e. nose rotates left or right.

    Yaw Axis Corrected
    Aircraft Principal Axes (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons
In a conventional aircraft, pitch is controlled by a horizontal elevator attached to the tail. Yaw is controlled by a rudder (much like on a boat) fitted to the vertical tail plane. Finally, ailerons fitted to the wings are used to roll the aircraft from side to side. In each case, a change in attitude of the aircraft is accomplished by changing the lift over one of these control surfaces.
    For example:

1. Moving the elevator down increases the effective camber across the horizontal tail plane, thereby increasing the aerodynamic lift at the rear of the aircraft and causing a nose-down moment about the aircraft’s centre of gravity. Conversely, an upward movement of the elevator induces a nose-up moment.
    2. In the case of the rudder, deflecting the rudder to one side increases the lift in the opposite direction and hence rotates the aircraft nose in the direction of the rudder deflection.
    3. In the case of ailerons, one side is being depressed while the other is raised to produce increased lift on one side and decreased lift on the other, thereby rolling the aircraft.

    ControlSurfaces
    Aircraft Control Surfaces By Piotr Jaworski (http://www.gnu.org/copyleft/fdl.html) via Wikimedia Commons
In the early 20th century, the notion of using an elevator and rudder to control pitching and yawing was appreciated by aircraft pioneers. However, the idea of banking an aircraft to control its direction was relatively new. This is fundamentally what the Wright brothers understood. Looking at the Wright Flyer from 1903, we can clearly see a horizontal elevator at the front and a vertical rudder at the back to control pitch and yaw. But the big innovation was the wing-warping mechanism used to control the sideways rolling of the aircraft. Check out the video below to see the elevator, rudder and wing-warping mechanisms in action.


    Today, many other control systems are being used in addition to, or instead of, the conventional system outlined above. Some of these are:

    1. Elevons – combined ailerons and elevators.
    2. Tailerons – two differentially moving tailplanes.
    3. Leading edge slats and trailing edge flaps – mostly for increased lift at takeoff and landing.

But the principle of operation is fundamentally the same: the lift over a certain portion of the aircraft is changed, causing a moment about the centre of gravity.

    Special Aileron Conditions
    Two special conditions arise in the operation of the ailerons.

The first is known as adverse yaw. As the ailerons are deflected, one up and one down, the aileron pointing down induces more aerodynamic drag than the aileron pointing up. This induced drag is a function of the amount of lift created by the airfoil: in simplistic terms, an increase in lift produces stronger trailing vortices, which tilt the local lift vector backwards and act as a net retarding force on the aircraft. As the downward-pointing aileron produces more lift, its induced drag is correspondingly greater. This increased drag on the wing with the downward aileron (the rising wing) yaws the aircraft towards that wing, which must be counterbalanced by the rudder. Designers can counteract adverse yaw with differential ailerons, in which the downward-deflecting aileron moves through a smaller angle than the upward-deflecting one. Alternatively, Frise ailerons are used, whose leading edge protrudes into the airflow beneath the wing when the aileron is deflected upwards, increasing the drag on the up-going aileron and thereby helping to balance the induced drag acting on the down-going aileron of the other wing. The problem with Frise ailerons is that they can lead to dangerous flutter vibrations, and therefore differential aileron movement is typically preferred.
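The lift-squared dependence of induced drag can be sketched with the classic finite-wing relation C_Di = C_L^2 / (pi * e * AR). The aspect ratio, Oswald efficiency factor and lift coefficients below are invented for illustration, not data for any particular aircraft:

```python
from math import pi

# Induced drag grows with the square of the lift coefficient, so the wing
# whose aileron is deflected down (more lift) also drags more, yawing the
# nose away from the intended turn. All numbers are illustrative.

AR = 7.0  # wing aspect ratio (assumed)
e = 0.8   # Oswald span-efficiency factor (assumed)

def induced_drag_coeff(CL):
    # classic finite-wing induced-drag relation
    return CL ** 2 / (pi * e * AR)

CL_down = 0.6  # wing with the down-going aileron (increased lift)
CL_up = 0.4    # wing with the up-going aileron (decreased lift)

print(f"C_Di, rising wing:  {induced_drag_coeff(CL_down):.4f}")
print(f"C_Di, falling wing: {induced_drag_coeff(CL_up):.4f}")
```

The drag imbalance between the two wings is what yaws the nose towards the rising wing.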

    The second effect is known as aileron reversal, which occurs under two different scenarios.

    • At very low speeds with high angles of attack, e.g. during takeoff or landing, the downward deflection of an aileron can stall a wing, or at the least reduce the lift across the wing, by increasing the effective angle of attack past sustainable levels (boundary layer separation). In this case, the downward aileron produces the opposite of the intended effect.
    • At very high airspeeds, the upward or downward deflection of an aileron may produce large torsional moments about the wing, such that the entire wing twists. For example, a downward aileron will twist the trailing edge up and leading edge down, thereby decreasing the angle of attack and consequently also the lift over that wing rather than increasing it. In this case, the structural designer needs to ensure that the torsional rigidity of the wing is sufficient to minimise deflections under the torsional loads, or that the speed at which this effect occurs is outside the design envelope of the aircraft.

    Stability
What do we mean by the stability of an aircraft? Fundamentally, we have to distinguish between the response of the aircraft to an external disturbance with and without the pilot reacting to the perturbation. Here we will limit ourselves to the inherent stability of the aircraft. Hence, the aircraft is said to be stable if it returns to its original equilibrium state after a small perturbing displacement, without the pilot intervening; the response arises purely from the inherent design. The first type of stability is static stability: the airplane is statically stable when it tends back towards the original steady flight condition after a small disturbance; statically unstable when it continues to move away from the original steady flight condition; and neutrally stable when it simply remains in the new, disturbed condition. The second, and more pernicious, type of stability is dynamic stability, which concerns the time history of the response. The airplane may converge smoothly back to the original steady flight state; it may overshoot and then converge to the original state in an oscillatory manner; or it may diverge completely and behave uncontrollably, in which case the pilot is well advised to intervene. Static instability naturally implies dynamic instability, but static stability does not generally guarantee dynamic stability.
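These categories can be illustrated with a one-degree-of-freedom caricature of the pitch dynamics, theta'' + c·theta' + k·theta = 0, where k > 0 stands in for a restoring (statically stabilising) moment and c > 0 for aerodynamic damping. The model and every number in it are purely illustrative, not a real aircraft model:

```python
# A one-degree-of-freedom caricature of pitch dynamics after a disturbance:
#     theta'' + c*theta' + k*theta = 0
# k > 0 models a restoring (statically stabilising) moment, c > 0 damping.
# Purely illustrative numbers, not any real aircraft.

def peak_response(k, c, theta0=0.1, dt=0.001, t_end=30.0):
    """Largest |theta| seen in the second half of the simulated response."""
    theta, omega = theta0, 0.0
    n = int(t_end / dt)
    peak = 0.0
    for i in range(n):
        # semi-implicit Euler integration of the linear oscillator
        omega += (-c * omega - k * theta) * dt
        theta += omega * dt
        if i > n // 2:
            peak = max(peak, abs(theta))
    return peak

print(peak_response(k=4.0, c=0.5))   # statically and dynamically stable: dies out
print(peak_response(k=4.0, c=-0.2))  # statically stable, dynamically unstable: oscillation grows
print(peak_response(k=-1.0, c=0.5))  # statically unstable: diverges outright
```

The third case shows why static instability implies dynamic instability: with no restoring term there is nothing for the damping to act against.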

    Aircraft static longitudinal stability
    Three cases for static stability: following a pitch disturbance, aircraft can be either unstable, neutral, or stable. By Olivier Cleynen via Wikimedia Commons.
    Longitudinal/Directional stability
    By longitudinal stability we refer to the stability of the aircraft around the pitching axis. The characteristics of the aircraft in this respect are influenced by three factors:

    1. The position of the centre of gravity (CG). As a rule of thumb, the further forward (towards the nose) the CG, the more stable the aircraft with respect to pitching. However, far-forward CG positions make the aircraft difficult to control, and in fact the aircraft becomes increasingly nose heavy at lower airspeeds, e.g. during landing. The further back the CG is moved the less statically stable the aircraft becomes. There is a critical point at which the aircraft becomes neutrally stable and any further backwards movement of the CG leads to uncontrollable divergence during flight.
    2. The position of the centre of pressure (CP). The centre of pressure is the point at which the aerodynamic lift forces are assumed to act if discretised onto a single point. Thus, if the CP does not coincide with the CG, pitching moments will naturally be induced about the CG. The difficulty is that the CP is not static, but can move during flight depending on the angle of incidence of the wings.
    3. The design of the tailplane and particularly the elevator. As described previously, the role of the elevator is to control the pitching rotations of the aircraft. Thus, the elevator can be used to counter any undesirable pitching rotations. During the design of the tailplane and aircraft on a whole it is crucial that the engineers take advantage of the inherent passive restoring capabilities of the elevator. For example, assume that the angle of incidence of the wings increases (nose moves up) during flight as a result of a sudden gust, which gives rise to increased wing lift and a change in the position of the CP. Therefore, the aircraft experiences an incremental change in the pitching moment about the CG given by

    [latex](\text{Incremental increase in lift}) \times (\text{new distance of CP from CG})[/latex]

    At the same time, the elevator angle of attack also increases due to the nose up/tail down perturbation. Hence, the designer has to make sure that the incremental lift of the elevator multiplied by its distance from the CG is greater than the effect of the wings, i.e.

    [latex](\text{Incremental increase in lift} \times \text{new distance of CP from CG})_{elevator} > (\text{Incremental increase in lift} \times \text{new distance of CP from CG})_{wings}[/latex]

As a result of the interplay between CP and CG, tailplane design greatly influences the degree of static pitching stability of an aircraft. In general, due to the tear-drop shape of an aircraft fuselage, the CP of an aircraft is typically ahead of its CG. Thus, the lift forces acting on the aircraft will always contribute some form of destabilising moment about the CG. It is mainly the job of the vertical tailplane (the fin) to provide directional stability, and without the fin most aircraft would be incredibly difficult to fly, if not outright unstable.
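The inequality above can be made concrete with some numbers. The lift increments and moment arms below are invented purely for illustration; what matters is that the tail's small lift increment acting on a long arm can outweigh the wing's larger increment acting close to the CG:

```python
# The static pitch-stability condition from the text: after a nose-up gust,
# the tail's restoring moment about the CG must exceed the wing's
# destabilising moment. Lift increments [N] and arms [m] are invented.

def pitching_moment(delta_lift, arm):
    """Incremental moment about the CG = incremental lift x moment arm."""
    return delta_lift * arm

# Wing: CP slightly ahead of the CG -> destabilising nose-up contribution
wing_moment = pitching_moment(delta_lift=2000.0, arm=0.3)
# Tailplane: much smaller lift increment, but a long arm behind the CG
tail_moment = pitching_moment(delta_lift=150.0, arm=15.0)

statically_stable = tail_moment > wing_moment
print(f"wing: {wing_moment} N·m, tail: {tail_moment} N·m, stable: {statically_stable}")
```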

    Lateral Stability
    By lateral stability we are referring to the stability of the aircraft when rolling one wing down/one wing up, and vice versa. As an aircraft rolls and the wings are no longer perpendicular to the direction of gravitational acceleration, the lift force, which acts perpendicular to the surface of the wings, is also no longer parallel with gravity. Hence, rolling an aircraft creates both a vertical lift component in the direction of gravity and a horizontal side load component, thereby causing the aircraft to sideslip. If these sideslip loads contribute towards returning the aircraft to its original configuration, then the aircraft is laterally stable. Two of the more popular methods of achieving this are:

    1. Upward-inclined wings, which take advantage of the dihedral effect. As an aircraft is disturbed laterally, the rolling action to one side results in a greater angle of incidence on the downward-facing wing than the upward-facing one. This occurs because the forward and downward motion of the wing is equivalent to a net increase in angle of attack, whereas the forward and upward motion of the other wing is equivalent to a net decrease. Therefore, the lift acting on the downward wing is greater than on the upward wing. This means that as the aircraft starts to roll sideways, the lateral difference in the two lift components produces a moment imbalance that tends to restore the aircraft back to its original configuration. This is in effect a passive controlling mechanism that does not need to be initiated by the pilot or any electronic stabilising control system onboard. The opposite destabilising effect can be produced by downward pointing anhedral wings, but conversely this design improves manoeuvrability.

      The Dihedral Effect. Figure from (1)
      The Dihedral Effect with Sideslip. Figure from (1).
    2. Swept back wings. As the aircraft sideslips, the downward-pointing wing has a shorter effective chord length in the direction of the airflow than the upward-pointing wing. The shorter chord length increases the effective camber (curvature) of the lower wing and therefore leads to more lift on the lower wing than on the upper. This results in the same restoring moment discussed for dihedral wings above.

The Sweepback Effect of Shortened Chord. Figure from (1).

It is worth mentioning that anhedral and swept-back wings can be combined to reach a compromise between stability and manoeuvrability. For example, an aircraft may be over-designed with heavily swept wings, with some of the stability then traded away by an anhedral design to improve manoeuvrability.

From Calvin and Hobbes Daily (http://calvinhobbesdaily.tumblr.com/image/137916137184)

Interaction of Longitudinal/Directional and Lateral Stability
    As described above, movement of the aircraft in one plane is often coupled to movement in another. The yawing of an aircraft causes one wing to move forwards and the other backwards, and thus alters the relative velocities of the airflow over the wings, thereby resulting in differences in the lift produced by the two wings. The result is that yawing is coupled to rolling. These interaction and coupling effects can lead to secondary types of instability.

    For example, in spiral instability the directional stability of yawing and lateral stability of rolling interact. When we discussed lateral stability, we noted that the sideslip induced by a rolling disturbance produces a restoring moment against rolling. However, due to directional stability it also produces a yawing effect that increases the bank. The relative magnitude of the lateral and directional restoring effects define what will happen in a given scenario. Most aircraft are designed with greater directional stability, and therefore a small disturbance in the rolling direction tends to lead to greater banking. If not counterbalanced by the pilot or electronic control system, the aircraft could enter an ever-increasing diving turn.

Another example is the Dutch roll, an intricate back-and-forth between yawing and rolling. If a swept-wing aircraft is perturbed by a yawing disturbance, the now slightly more forward-pointing wing generates more lift, for exactly the same reason as in the sideslip case: a shorter effective chord and a larger effective area presented to the airflow. As a result, the aircraft rolls towards the slightly more backward-pointing wing. However, the same forward-pointing wing with higher lift also creates more induced drag, which tends to yaw the aircraft back in the opposite direction. Under the right circumstances this sequence of events can perpetuate itself and create an uncomfortable wobbling motion. In most aircraft today, yaw dampers in the automatic control system prevent this oscillatory instability.

In this post I have only described a small number of the control challenges that engineers face when designing aircraft. Most aircraft today are controlled by highly sophisticated computer systems that make loss of control or stability highly unlikely. Unassisted manual flying is getting rarer and is mostly limited to takeoff and landing manoeuvres. In fact, it is more likely that the interface between human and machine is what will cause most system failures in the future.

    References

    (1) Richard Bowyer (1992). Aerodynamics for the Professional Pilot. Airlife Publishing Ltd., Shrewsbury, UK.

  • Risk and failure in complex engineering systems

    “We must ensure this never happens again.”

This is a common reaction to instances of catastrophic failure. However, in complex engineering systems, this statement is inherently paradoxical. If the right lessons are learned and the appropriate measures are taken, the same failure will most likely never happen again. But catastrophes as such are never completely preventable, so the next time around, failure will occur somewhere new and unforeseen. Welcome to the world of complexity.

    Boiled down to its fundamentals, engineering deals with the practical – the development of tools that work as intended. Failure is a human condition, and as such, all man-made systems are prone to failure. Furthermore, success should not be defined as the absence of failure, but rather how we cope with failure and learn from it – how we conduct ourselves in spite of failure.

Failure and risk are closely linked. The way I define risk here is the probability of an irreversible negative outcome. In a perfect world of complete knowledge and no risk, we would know exactly how a system will behave beforehand and have perfect control over all outcomes. Hence, in such an idealised world there is very little room for failure. In the real world, however, knowledge is far from complete, people and man-made systems behave and interact in unforeseen ways, and changes in the surrounding environmental conditions can drastically alter the intended behaviour. Therefore, our understanding of, and attitude towards, risk plays a major role in building safe engineering systems.

The first step is to acknowledge that our perception of risk is very personal. It is largely driven by human psychology and depends on a favourable balance of risk and reward. For example, fear of flying is considerably more common than fear of driving, even though air travel is much safer than road travel. As plane crashes are typically more severe than car crashes, it is easy to form skewed perceptions of the respective risks involved. What is more, driving a car, for most people a daily activity, is far more familiar than flying in an airplane.

    Second, science and engineering do not attempt to predict or guarantee a certain future. There will never be a completely stable, risk free system. All we can hope to achieve is a level of risk that is comparable to that of events beyond our control. Risk and uncertainty arise in the gap between what we know and what we don’t – between how we design the system to behave and how it can potentially behave. This knowledge gap can lead to two types of risk. There are certain things we appreciate that we do not understand, i.e. the known unknowns. Second, and more pernicious, are those things we are not even aware of, i.e. the unknown unknowns, and it is these failures that wreak the most havoc. So how do we protect ourselves against something we don’t even see coming? How do engineers deal with this second type of risk?

    The first strategy is the safety factor or margin of safety. A safety factor of 2 means that if a bridge is expected to take a maximum service load of X (also called the demand), then we design the structure to hold 2X (also called the capacity). In the aerospace industry, safety protocols require all parts to maintain integrity up to 1.2x the service load, i.e. a limit safety factor of 1.2. Furthermore, components need to sustain 1.5x the service load for at least three seconds, the so-called ultimate safety factor. In some cases, statistical techniques such as Monte Carlo analyses are used to calculate the probability that the demand will exceed the capacity.
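A minimal sketch of such a Monte Carlo check, assuming (for illustration only) normally distributed load and strength, a nominal safety factor of 1.5 and 10% scatter on both sides:

```python
import random

# Monte Carlo sketch of the probability that demand (load) exceeds capacity
# (strength). The normal distributions, 10% scatter and nominal safety
# factor of 1.5 are illustrative assumptions, not certification data.

random.seed(42)  # fixed seed so the estimate is repeatable

def failure_probability(mean_load, sd_load, mean_strength, sd_strength, n=200_000):
    failures = 0
    for _ in range(n):
        load = random.gauss(mean_load, sd_load)
        strength = random.gauss(mean_strength, sd_strength)
        if load > strength:
            failures += 1
    return failures / n

# Capacity is 1.5 x demand on average, with 10% scatter on each side
p_fail = failure_probability(mean_load=100.0, sd_load=10.0,
                             mean_strength=150.0, sd_strength=15.0)
print(f"Estimated probability that demand exceeds capacity: {p_fail:.4f}")
```

With these numbers the analytical answer is roughly 0.3% (the load-minus-strength margin is normal with mean 50 and standard deviation about 18); the simulation should land close to that.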

    The second strategy is to employ redundancies in the design. Hence, back-ups or contingencies are in place to prevent a failure from progressing to catastrophic levels. In structural design, for example, this means that there is enough untapped capacity within the structure, such that a local failure leads to a rebalancing/redirection of internal loads without inducing catastrophic failure. Part of this analysis includes the use of event and fault trees that require engineers to conjure the myriad of ways in which a system may fail, assign probabilities to these events, and then try to ascertain how a particular failure affects other parts of the system.

    Event Tree Diagram
Event tree diagram (via Wikimedia Commons).
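The arithmetic behind the redundancy strategy is simple if, and only if, failures are independent: k identical back-ups all fail together with probability p^k. The numbers below are illustrative; in real fault-tree analysis, common-cause failures (shared power, shared software, shared environment) are exactly what breaks the independence assumption:

```python
# If a single component fails with probability p and failures are truly
# independent, k redundant components all fail together with probability
# p**k. Illustrative numbers only; independence is a strong assumption.

def system_failure_prob(p_component, k_redundant):
    return p_component ** k_redundant

p = 0.01  # assumed 1% failure probability for a single component
for k in (1, 2, 3):
    print(f"{k} redundant component(s): {system_failure_prob(p, k):.0e}")
```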
Unfortunately, some engineering systems today have become so complex that it is difficult to employ fault and event trees reliably. Rising complexity means that it is impossible to know all functional interactions beforehand, and it is therefore difficult, if not impossible, to predict exactly how failure in one part of the system will affect other parts. This phenomenon has been popularised by the “butterfly effect” – a scenario in which, in an all-connected world, the stroke of a butterfly’s wings on one side of the planet causes an earthquake on the other.

The increasing complexity in engineering systems is driven largely by the advance of technology based on our scientific understanding of physical phenomena at increasingly smaller length scales. For example, as you are reading this on your computer or smartphone screen, you are, in fact, interacting with a complex system that spans many different layers. In very crude terms, your internet browser sits on top of an operating system, which is programmed in one or many different programming languages, and these languages have to be translated to machine code to interact with the microprocessor. In turn, the computer’s processor interacts with other parts of the hardware such as the keyboard, mouse, disc drives, power supply, etc., which have to interface seamlessly for you to be able to make sense of what appears on screen. Next, the computer’s microprocessor is made up of a number of integrated circuits, which are comprised of registers and memory cells, which are further built up from a network of logic gates, which ultimately are nothing but a layer of interconnected semiconductors. Today, the expertise required to handle the details at a specific level is so vast that very few people understand how the system works at all levels.

In the world of aviation, the Wright brothers were the first to realise that no one would ever design an effective aircraft without an understanding of the fields of propulsion, lift and control. Not only did they understand the physics behind flight, Orville and Wilbur were master craftsmen from years of running their own bike shop, and later went as far as building the engine for the Wright Flyer themselves. Today’s airplanes are of course significantly more sophisticated than the aircraft of 100 years ago, such that in-depth knowledge of every aspect of a modern jumbo jet is out of the question. Yet, the risk of increasing specialism is that there are fewer people who understand the complete picture, and who appreciate the complex interactions that can emerge from even simple, yet highly interconnected, processes.

With increasing complexity, the solution should not be further specialisation and siloing of information, as this increases the potential for unknown risks. For example, consider the relatively simple case of a double pendulum. Such a system exhibits chaotic behaviour: we know and understand the underlying physics of the problem, yet it is impossible to predict a priori how the pendulum will swing. This is because at specific points the system can bifurcate onto a number of different paths, and the exact behaviour depends on the initial conditions when the system is started. These bifurcations can be very sensitive to small differences in the initial conditions, such that two processes that start with almost the same, but not identical, initial conditions can diverge considerably after only a short time.

    Double-compound-pendulum
    A double rod pendulum animation showing chaotic behaviour (via Wikimedia Commons).
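This sensitivity is easy to demonstrate numerically. The sketch below integrates the standard textbook equations of motion for an idealised double pendulum (point masses, equal rod lengths) twice, with initial angles that differ by only 10^-8 radians; masses, lengths and the 10-second time span are arbitrary illustrative choices:

```python
from math import sin, cos

# Two runs of an idealised double pendulum whose initial angles differ by
# only 1e-8 rad. Standard point-mass equations of motion, equal masses and
# rod lengths; all parameters are arbitrary illustrative choices.

g, m, l = 9.81, 1.0, 1.0

def derivs(state):
    """Angular velocities and accelerations for (theta1, theta2, w1, w2)."""
    t1, t2, w1, w2 = state
    d = t1 - t2
    den = 3.0 * m - m * cos(2.0 * d)   # = 2*m1 + m2 - m2*cos(2d) with m1 = m2 = m
    a1 = (-3.0 * g * m * sin(t1) - m * g * sin(t1 - 2.0 * t2)
          - 2.0 * sin(d) * m * (w2 * w2 * l + w1 * w1 * l * cos(d))) / (l * den)
    a2 = (2.0 * sin(d) * (2.0 * m * w1 * w1 * l + 2.0 * m * g * cos(t1)
          + m * w2 * w2 * l * cos(d))) / (l * den)
    return (w1, w2, a1, a2)

def rk4_step(state, dt):
    # classic fourth-order Runge-Kutta step
    k1 = derivs(state)
    k2 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = derivs(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d_)
                 for s, a, b, c, d_ in zip(state, k1, k2, k3, k4))

def run(theta2_offset, t_end=10.0, dt=0.001):
    state = (2.0, 2.0 + theta2_offset, 0.0, 0.0)  # large angles -> chaotic regime
    for _ in range(int(t_end / dt)):
        state = rk4_step(state, dt)
    return state

a = run(0.0)
b = run(1e-8)
separation = abs(a[0] - b[0]) + abs(a[1] - b[1])
print(f"Angle separation after 10 s: {separation:.3f} rad (started at 1e-8 rad)")
```

After ten simulated seconds the two trajectories, initially indistinguishable, have typically drifted apart by many orders of magnitude relative to the initial offset.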
Under these circumstances, even small local failures within a complex system can cascade rapidly, accumulate and cause global failure in unexpected ways. Thus, the challenge in designing robust systems arises from the fact that the performance of the complete system cannot be predicted by an isolated analysis of its constituent parts by specialists. Rather, effective and safe design requires holistic systems thinking. A key aspect of systems thinking is to acknowledge that the characteristics of a specific layer emerge from the interacting behaviour of the components working at the level below. Hence, even when the behaviour of a specific layer is governed by understood deterministic laws, the outcome of these laws cannot be predicted with certainty beforehand.

In this realm, engineers can learn from some of the strategies employed in medicine. Oftentimes, the origin, nature and cure of a disease are not clear beforehand, as the human body is its own example of a complex system with interacting levels of cells, proteins, molecules, etc. Some known cures work even though we do not understand the underlying mechanism, and some cures are not effective even though we understand the underlying mechanism. Thus, the engineering design process shifts from well-defined rules of best practice (know first, then act) to emergent ones (act first, then know), i.e. a system is designed to the best of current knowledge and then continuously iterated and refined based on reactions to failure.

    In this world, the role of effective feedback systems is critical, as flaws in the design can remain dormant for many years and emerge suddenly when the right set of external circumstances arise. As an example, David Blockley provides an interesting analogy of how failures incubate in his book “Engineering: A very short introduction.”

    “…[Imagine] an inflated balloon where the pressure of the air in the balloon represents the ‘proneness to failure’ of a system. … [W]hen air is first blown into the balloon…the first preconditions for [an] accident are established. The balloon grows in size and so does the ‘proneness to failure’ as unfortunate events…accumulate. If [these] are noticed, then the size of the balloon can be reduced by letting air out – in other words, [we] reduce some of the predisposing events and reduce the ‘proneness to failure’. However, if they go unnoticed…, then the pressure of events builds up until the balloon is very stretched indeed. At this point, only a small trigger event, such as a pin or lighted match, is needed to release the energy pent up in the system.”

    Often, this final trigger is blamed as the cause of the accident. But it isn’t. If we prick the balloon before blowing it up, it will subsequently leak and not burst. The over-stretched balloon itself is the reason why an accident can happen in the first place. Thus, in order to reduce the likelihood of failure, the accumulation of preconditions has to be monitored closely, and necessary actions proposed to manage the problem.

    The main challenge for engineers in the 21st century is not more specialisation, but the integration of design teams from multiple levels to facilitate multi-disciplinary thinking across different functional boundaries. Perhaps, the most important lesson is that it will never be possible to ensure that failures do not occur. We cannot completely eliminate risk, but we can learn valuable lessons from failures and continuously improve engineering systems and design processes to ensure that the risks are acceptable.

     

    References
    David Blockley (2012). Engineering: A very short introduction. Oxford University Press. Oxford, UK.

  • The Dangers of Outsourcing

“Outsourcing” is a loaded term. In today’s globalised world it has come to mean many things – from using technology to farm out rote work over the internet, to sharing capacity with external partners that are more specialised in a particular task. However, inherent in the idea of outsourcing is the promise of reduced costs, either through reductions in labour costs, or via savings in overheads and tied-up capital.

I recently stumbled across a 2001 paper [1] by Dr Hart-Smith of the Boeing Company, discussing some of the dangers and fallacies in our thinking regarding the potential advantages of outsourcing. The points raised by Hart-Smith are particularly noteworthy as they deal with the fundamental goals of running a business, rather than arguing by analogy or by blind faith in proxy measurements. What follows is my take on the issue of outsourcing as it pertains to the aerospace industry only, loosely based on the insights provided by Dr Hart-Smith, and with some of my own understanding of the topic from disparate sources that I believe are pertinent to the discussion.

That being said, the circumstances under which outsourcing makes economic sense depend on a broad spectrum of variables and are therefore highly complex. If you feel that my thinking is misconstrued in any way, please feel free to get in touch. Now let’s delve a bit deeper into the good, the bad and the ugly of the outsourcing world.

    Any discussion on outsourcing can, in my opinion, be boiled down to two fundamental drivers:

1. The primary goal of running a business: making money. Setting non-profits aside, a business exists to make a profit for its shareholders. If a business doesn’t make any money today, and isn’t expected to make a profit in the future, i.e. is not valuable on a net present value basis, then it is a lousy business. Any other metric used to measure the performance of a business, be it an efficiency ratio such as return on capital employed, is a helpful proxy but not the ultimate goal.
    2. Outsourcing is based on Ricardo’s idea of comparative advantage: if two parties each specialise in the production of a different good and then trade, both parties are better off than if each produced both goods for its own use, even if one party is more efficient than the other at producing both goods.

    Using these two points as our guidelines it becomes clear very quickly under what conditions a company should decide to outsource a certain part of its business:

    • Another company is more specialised in this line of business and can therefore create a higher-quality product. This can either be achieved via:
      • Better manufacturing facilities, i.e. more precisely dimensioned components that save money in the final assembly process
      • Superior technical expertise. A good example is the jet engines on an aircraft. Neither Boeing nor Airbus design or manufacture their own engines, as the complexity of this particular product means that other companies have specialised to make a great product in this arena.
    • The rare occasion that outsourcing a particular component of an aircraft results in a net overall profit for the entire design and manufacturing project. However, the decision to outsource should never be based on the notion of reduced costs for a single component, as there is no one-to-one causation between reducing costs for a single component and increased profits for the whole project.

    Note that in either case the focus is on receiving extra value for something the company pays for, rather than on reducing costs. In fact, as I will explain below, outsourcing often leads to increases in cost, rather than cost reductions. Under these circumstances, it only makes sense to outsource if this additional cost is traded for extra value that cannot be created in-house, i.e. manufacturing value or technical value.

    Reducing Costs

    “Cost reduction” is another buzzword often used to argue in favour of outsourcing. Considering only the first-order effects, it makes intuitive sense that offloading a certain segment of a business to a third party will reduce costs via lower labour costs and savings in overheads, depreciation and capital outlays. In fact, this is one of the allures of the globalised world and the internet: the means of outsourcing work to lower-wage countries are cheaper than ever before in history.

    However, the second-order effects of outsourcing are rarely considered. The first fundamental rule of ecology is that in a complex system you can never do only one thing. As all parts of a complex system are intricately linked, perturbing the system in one area will have inevitable knock-on effects in another. Additionally, if the system responds non-linearly to external stimuli, these knock-on effects are non-intuitive and almost impossible to predict a priori. Outsourcing an entire segment of a project should probably be classed as a major perturbation, and as all components of a complex engineering product, such as an aircraft, are inherently linked, a decision in one area will certainly affect other areas of the project as well. Hence, consider the following second-order effects that should be accounted for when outsourcing a certain line of business:

    • Quality assurance is harder out-of-house, and hence reworking components that are not to spec may cost more in the long run.
    • Additional labour may be required in-house in order to coordinate the outsourced work, interact with the third party and interface the outsourced component with the in-house assembly team.
    • Concurrent engineering and the ability to adapt designs is much harder. In order to reduce their costs, subcontractors often operate on fixed contracts, i.e. the design specification for a component is fixed or the part to be manufactured cannot be changed. Hence, the flexibility to adapt the design of a part further down the line is restricted, and this constraint may create a bottleneck for other interfacing components.
    • Costs associated with subassemblies that cannot be fitted together balloon quickly, and the ensuing rework and detective work to find the source of the imprecision delays the project.
    • There is a need for additional transportation due to off-site production and increased manufacturing time.
    • It is harder to coordinate the manufacturing schedules of multiple external subcontractors who might all be employing different planning systems, and more inventory is usually created.

    Therefore there is an inherent clash between trying to minimise costs locally, i.e. the costs for one component in isolation, and keeping costs down globally, i.e. for the entire project. In the domain of complex systems, local optimisation can lead to fragility of the system in two ways. First, small perturbations from local optima typically have greater effects on the overall performance of the system than perturbations from locally sub-optimal states. Second, locally optimising one factor of the system may force other factors far from their optima, and hence reduce the overall performance of the system. A general heuristic is that the best solution is a compromise in which individual components operate at sub-optimal levels, i.e. with excess capacity, such that the overall system is robust enough to adapt to unforeseen perturbations in its operating state.

    Furthermore, the decision to outsource the design or the manufacture of a specific component needs to factored into the overall design of the product as a early as possible. Thus, all interfacing assemblies and sub-assemblies are designed with this particular reality in mind, rather than having to adapt to this situation a posteriori. This is because early design decisions have the highest impact on the final cost of a product. As a general rule of thumb, 80% of the final costs are incurred by the first 20% of the design decisions made, such that late design changes are always exponentially more expensive than earlier ones. Having to fix misaligned sub-assemblies at final assembly costs orders of magnitude more than additional planning up front.

    Finally, the theory of constraints teaches us that the performance of the overall project can never exceed that of its least proficient component. Hence, the overall quality of the final assembly is driven by the quality of its worst suppliers. This means that in order to minimise any problems, the outsourcing company needs to provide extra quality and technical support for the subcontractors, extra employees for supply chain management, and additional in-house personnel to deal with the extra detail design work and project management. Dr Hart-Smith warns that

    The dollar value of out-sourced work is a very poor surrogate for internal cost savings.

    With all this extra work, the reality is that outsourcing should be considered an extra cost rather than a cost saving, albeit, if done correctly, in exchange for higher-quality parts.

    Outsourcing Profits

    Hypothetically, in the extreme case where every bit of design and manufacturing work is outsourced, the only remaining role for the original equipment manufacturer (OEM) of the aircraft is to serve as a systems integrator. However, in this scenario, all profits are outsourced as well. This reality is illustrated by a simple example. The engines and avionics comprise about 50% of the total cost of construction of an aircraft, and the remaining 50% is at the OEM’s discretion. Would you rather earn a 25% profit margin on 5% of the total work, or a 5% profit margin on 50% of the total work? In the former case the OEM will look much more profitable on paper (higher margin), but the total amount of cash earned in the second scenario is higher. Hence, in a world where 50% of the work naturally flows to subcontractors supplying the engines, avionics and control systems, there isn’t much left of the aircraft to outsource if enough cash is to be made to keep the company in business. Without cash there is no money to pay engineers to design new aircraft and no buffer on hand to weather a downturn. If there is anything that the 20th century has taught us, it is that in the world of high-tech, any company that does not innovate and relies purely on derivative products is doomed to be disrupted by a new player.
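    The margin-versus-volume trade-off is easy to check with a back-of-the-envelope calculation. The programme value and percentages below are purely illustrative, not actual figures from any manufacturer:

```python
# Absolute profit = margin x share of total work x programme value.
# Illustrative point: a high margin on a thin slice of the work can
# earn less cash than a modest margin on a large slice.
programme_value = 100_000_000  # hypothetical total programme value in $

slim_oem = 0.25 * 0.05 * programme_value   # 25% margin on 5% of the work
broad_oem = 0.05 * 0.50 * programme_value  # 5% margin on 50% of the work

print(f"High-margin integrator earns ${slim_oem:,.0f}")
print(f"Lower-margin manufacturer earns ${broad_oem:,.0f}")
```

    The integrator looks better on paper (a 25% margin versus 5%), yet takes home half the cash of the lower-margin manufacturer.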

    Second, subcontractors are under exactly the same pressure as the OEM to maximise their profits. In fact, subcontractors have a greater incentive for fatter margins and higher returns on investment, as their smaller size increases the interest rates they pay on borrowed capital. This means that suppliers are not necessarily incentivised to manufacture tooling that can be reused for future products, as such tooling requires more design time and cannot be billed against future products. In-house production is much more likely to lead to this type of engineering foresight. Consider the production of a part that is estimated to cost the same to produce in-house as by a subcontractor, and to the same quality standards. The higher profit margins of the subcontractor naturally result in a higher overall price for the component than if it were manufactured in-house. However, standard accounting procedures would record this as a cost reduction, since all the first-order effects, such as the lower labour rate at the subcontractor, fewer employees and less capital tied up in hard assets at the OEM, create the illusion that outside work is cheaper than in-house work.

    Skin in the Game

    One of the most heavily outsourced aircraft in aerospace history was the Douglas Aircraft Company DC-10, and it was the suppliers who made all the profits on this plane. It is instructive that most subcontractors were not willing to be classified as risk-sharing partners. In fact, if the contracts are negotiated properly, most subcontractors have very little downside risk. For financial reasons, the systems integrator can rarely allow a subcontractor to fail, and therefore provides free technical support to the subcontractor in case of technical problems. In extreme cases, the OEM may even be forced to buy the subcontractor outright.

    This state of little downside risk is what NN Taleb calls the absence of “skin in the game” [2]. Subcontractors typically do not behave like employees do. Employees or “risk-sharing” partners have a reputation to protect and fear the economic repercussions of losing their paychecks. On the one hand, employees are more expensive than contractors and limit workforce flexibility. On the other hand, employees guarantee a certain dependability and reliability of work, i.e. downside protection against shoddy work. In Taleb’s words,

    So employees exist because they have significant skin in the game – and the risk is shared with them, enough risk for it to be a deterrent and a penalty for acts of undependability, such as failing to show up on time. You are buying dependability.

    Subcontractors, on the other hand, typically have more freedom than employees. They fear the law more than being fired. Financial repercussions can be built into contracts, and bad performance may lead to a loss of reputation, but an employee, by being part of the organisation and giving up some of his freedom, will always have more at stake, and therefore behave in more dependable ways. There are examples, like Toyota’s ecosystem of subcontractors, where mutual trust and “skin in the game” are built into the network via well thought-out profit sharing, risk sharing and financial penalties, but these relationships are not ad hoc and are based on long-term commitments.

    With a whole network of subcontractors, the performance of an operation is limited by its worst-performing segment. In this environment, OEMs are often forced to assist underperforming suppliers and therefore to accept additional costs. Again from NN Taleb [2],

    If you miss on a step in a process, often the entire business shuts down – which explains why today, in a supposedly more efficient world with lower inventories and more subcontractors, things appear to run smoothly and efficiently, but errors are costlier and delays are considerably longer than in the past. One single delay in the chain can stop the entire process.

    The crux of the problem is that a systems integrator, who is the one that actually sells the final product, i.e. gets paid last and carries the most tail risk, can only raise the price to levels that the market will sustain. Subcontractors, on the other hand, can push for higher margins and lock in a profit before the final plane is sold and thereby limit their exposure to cost over-runs.

    ROE

    The return on net assets or return on equity (ROE) metric is a powerful proxy for measuring how efficiently a company uses its equity or net assets (assets – liabilities; where assets are everything the company owns and liabilities everything the company owes) to create profit,

    [latex] ROE = \frac{Earnings}{Equity}. [/latex]

    The difference between high-ROE and low-ROE businesses is illustrated here using a mining company and a software company as (oversimplified) examples. The mining company needs a lot of physical hard assets to dig metals out of the ground, and hence ties up a considerable amount of capital in its operations. A software company, on the other hand, is asset-light, as the cost of computing hardware has fallen exponentially in line with Moore’s Law. Thus, if both companies make the same amount of profit, then the software company will have achieved this more efficiently than the mining company, i.e. required less initial capital to create the same amount of earnings. The ROE is a useful metric for investors, as it provides information regarding the expected rate of return on their investment. Indeed, in the long run, the rate of return on an investment in a company will converge to the ROE.

    In order to secure funding from investors and achieve favourable borrowing rates from lenders, a company is therefore incentivised to beef up its ROE. This can be done either by reducing the denominator of the ratio, or by increasing the numerator. Reducing equity means either running a more asset-light business or increasing liabilities by taking on debt. This is why debt is also a form of leverage: it allows a company to earn money on outside capital. Increasing the numerator is simple on paper but harder in reality, as earnings must grow without adding capital, e.g. via cost reductions or price increases.

    Therefore ROE is a helpful performance metric for management and investors, but it is not the ultimate goal. The goal of a for-profit company is to make money, i.e. maximise its earning power. Would you rather own a company that earns 20% on a business with $100 of equity, or 5% on a company with $1000 of tied-up capital? Yes, the first company is more efficient at turning a profit, but that profit is considerably smaller than for the second company. Of course, if the first company has the chance to grow to the size of the second in a few years’ time, and maintains or even expands its ROE, then this is a completely different scenario, and it would be a good investment to forego some earnings now for higher cashflow in the future. However, by and large, this is not the situation for large aircraft manufacturers such as Boeing and Airbus, and is restricted to fast-growing companies in the startup world.
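    The example above can be written out in a few lines of Python, using the hypothetical figures from the text:

```python
# ROE = earnings / equity is a proxy for capital efficiency,
# not for absolute earning power. Figures are the hypothetical
# ones from the text, not real company data.
def roe(earnings, equity):
    return earnings / equity

small_firm = {"earnings": 20.0, "equity": 100.0}    # 20% ROE
large_firm = {"earnings": 50.0, "equity": 1000.0}   # 5% ROE

print(f"Small firm ROE: {roe(**small_firm):.0%}, earnings ${small_firm['earnings']:.0f}")
print(f"Large firm ROE: {roe(**large_firm):.0%}, earnings ${large_firm['earnings']:.0f}")
# The small firm is more capital-efficient, yet the large firm makes more money.
```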

    Second, it is foolish to assume that the numerator and denominator are completely decoupled. In fact, in a manufacturing-intensive industry such as aerospace, the two terms are closely linked and their behaviour is complex, i.e. there are too many cause-and-effect relationships for us to truly understand how a reduction in assets will affect earnings. Blindly reducing assets, without taking into account the effect on the rate and cost of production, will always increase ROE, and can therefore always be dressed up as a positive step. In this manner, ROE can be misused as a false justification for excessive outsourcing. Given the complex relationship in the aerospace industry between earnings and net assets, the real value of the ROE ratio is to provide a ballpark figure of how much extra money the company can earn in its present state with a source of incremental capital. Thus, if a company with multiple billions in revenue currently has an ROE of 20%, then it can expect to earn an extra 20% on an incremental amount of further capital employed in the business, where the exact incremental amount is of course open to interpretation.

    In summary, there is no guarantee that a reduction in assets will directly result in an increase in profits, and the ROE metric is easily misused to justify capital reductions and outsourcing, when in fact, it should be used as a ballpark figure to judge how much additional money can currently be made with more capital spending. Thus, ROE should only be used as a performance metric but never as the overall goal of the company.

    A cautionary word on efficiency

    In a similar manner to ROE, the headcount of a company is an indicator of efficiency. If the same amount of work can be done by fewer people, then the company is operating more efficiently and hence should be more profitable. This is true to an extent, but not in the limit. Most engineers will agree that perfect efficiency is unattainable in the physical world as a result of dissipative mechanisms (e.g. heat, friction, etc.); indeed, perfect efficiency can only be achieved when no work is done at all. By analogy, it is meaningless to chase ever-improving levels of efficiency if this comes at the cost of reduced sales. Therefore, in some instances it may be wise to employ extra labour capacity in non-core activities in order to maintain a highly skilled workforce that is able to react quickly to opportunities in the marketplace, even if this comes at the cost of reduced efficiency.

    So when is outsourcing a good idea?

    Outsourcing happens all over the world today, so there is obviously a lot of merit to the idea. However, as I have described above, decisions to outsource should not be made blindly in terms of shedding assets or reducing costs, and need to be factored into the design process as early as possible. Outsourcing is a valuable tool in two circumstances:

    1. Access to better IP = Better engineering design
    2. Access to better facilities = More precise manufacturing

    First, certain components on modern aircraft have become so complex in their own right that it is not economical to design and manufacture these parts in-house. As a result, the whole operation is outsourced to a supplier that specialises in this particular product segment, and can deliver higher quality products than the prime manufacturer. The best example of this are jet engines, which today are built by companies like Rolls-Royce, General Electric and Pratt & Whitney, rather than Airbus and Boeing themselves.

    Second, contrary to popular belief, the major benefit of automation in manufacturing is not the elimination of jobs, but an increase in precision. Precision manufacturing prevents the incredibly costly duplication of work on out-of-tolerance parts further downstream in a manufacturing operation. Toyota, for example, understood very early on that in a low-cost operation, getting things right the first time around is key, and therefore anyone on the manufacturing floor has the authority to stop production and sort out problems as they arise. Therefore, access to automated precision facilities is crucial for aircraft manufacturers. However, for certain parts, a prime manufacturer may not be able to justify the high capital outlay for these machines as there is not enough capacity in-house for them to be utilised economically. Under these circumstances, it makes sense to outsource the work to an external company that can pool the work from a number of companies on their machines. This only makes sense if the supplier has sufficient capacity on its machines or is able to provide improved dimensional control, e.g. by providing design for assembly services to make the final product easier to assemble.

    Conclusion

    After this rather long exposition of the dangers of outsourcing in the aerospace industry, here are some of the key takeaways:

    1. Outsourcing should not be employed as a tool for cost reduction. More likely than not it will lead to extra labour and higher costs via increased transportation, rework and inventories for the prime manufacturer, and this extra cost should therefore be compensated for by better design engineering or better manufacturing precision than could be achieved in-house.
    2. Efficiency is not the primary goal of the operation, but can be used as a useful metric of performance. The goal of the operation is to make money.
    3. A basic level of work has to be retained in-house in order to generate sufficient cash to fund new products and maintain a highly skilled workforce. If the latter requires extra capacity, a diversification to non-core activities may be a better option than reducing headcount.
    4. Scale matters. Cost saving techniques for standardised high-volume production are typically inappropriate for low-volume industries like aerospace.
    5. Recognise the power of incentives. In-house employees typically have more “skin in the game” as risk-sharing partners, and therefore produce more dependable work than contractors.

    Sources

    [1] L.J. Hart-Smith. Out-sourced profits – the cornerstone of successful subcontracting. Boeing paper MDC 00K0096. Presented at Boeing Third Annual Technical Excellence (TATE) Symposium, St. Louis, Missouri, 2001.

    [2] N.N. Taleb. How to legally own another person. Skin in the Game. pp. 10-15. https://dl.dropboxusercontent.com/u/50282823/employee.pdf

  • The Navier-Stokes Equation

    The name we use for our little blue planet, “Earth”, is rather misleading. Water covers about 71% of Earth’s surface, while the other 29% consists of continents and islands. In fact, this patchwork of blue and brown, earth and water, makes our planet very unlike any other planet we know to be orbiting other stars. The name “Earth” reflects a longstanding worldview from a time when we were constrained to travelling over the solid parts of our planet. Not until the earliest seaworthy vessels, believed to have been used to settle Australia some 45,000 years ago, did humans venture onto the water.

    Not until the 19th century did humanity make a strong effort to travel through another vast sea of fluid: the atmosphere around us. Early pioneers in China invented ornamental wooden birds and primitive gliders around 500 BC, and later developed kites to spy on enemies from the air. In Europe, the discovery of hydrogen in the 18th century inspired intrepid pioneers to ascend into the lower altitudes of the atmosphere using rather explosive balloons, and in 1783 the brothers Joseph-Michel and Jacques-Étienne Montgolfier demonstrated a much safer alternative using hot-air balloons.

    The pace of progress accelerated dramatically around the late 19th century, culminating in the first heavier-than-air flight by Orville and Wilbur Wright in 1903. Just seven years later the German company DELAG invented the modern airline by offering commercial flights between Frankfurt and Düsseldorf using Zeppelins. After WWII, commercial air travel shrank the world thanks to the invention and proliferation of the jet engine. The de Havilland Comet, the first commercial jet airliner, led the early jet age until a series of catastrophic failures, and was superseded in 1958 by one of the most iconic aircraft of all, the Boeing 707. Soon humanity began exploring the greater heights of our atmosphere and beyond, with Yuri Gagarin making the first manned orbit of Earth in 1961, and Neil Armstrong and Buzz Aldrin walking on the Moon in 1969, a mere 66 years after the first flight at Kitty Hawk by the Wright brothers.

    Air and space travel has greatly altered our view of our planet, one from the solid, earthly connotations of “Earth” to the vibrant pictures of the blue and white globe we see from space. In fact the blue of the water and the white of the air allude to the two fluids humans have used as media to travel and populate our planet to a much greater extent than travel on solid ground would have ever allowed.

    Fundamental to the technological advancement of sea- and airfaring vehicles was a physical understanding of the media of travel: water and air. In water, the patterns of smooth and turbulent flow are readily visible, and this first sparked the interest of scientists in characterising these flows. The fluid of flight, air, is not as easily visualised and is slightly more complicated to analyse. The fundamental difference between water and air is that the latter is compressible, i.e. the volume of a fixed mass of air can be decreased at the expense of increasing its internal pressure, while water, for practical purposes, is not. Extending the early equations for water to compressible fluids initiated the scientific discipline of aerodynamics and helped propel the “Age of Flight” off the ground.

    One of the groundbreaking treatises was Daniel Bernoulli’s Hydrodynamica, published in 1738, which, among other things, contained the statement many of us learn in school: that fluids travel faster in areas of lower pressure than in areas of higher pressure. This statement is often used to explain, incorrectly, why modern fixed-wing aircraft generate lift. According to this explanation, the curved top surface of the wing forces air to flow faster, thereby lowering the pressure and inducing lift. Alas, the situation is slightly more complicated. In simple terms, lift is induced by flow curvature, as the centripetal forces in curved flow fields create pressure gradients between the differently curved flows around the airfoil. As the flow-visualisation picture below shows, the streamlines over the top surface of the airfoil are the most curved, and this leads to a net suction pressure on the top surface. In fact, Bernoulli’s equation is not needed to explain the phenomenon of lift at all. For a more detailed explanation I highly recommend the journal article on the topic by Dr Babinsky of Cambridge University.

    Flow lines around an airfoil (Source: Wikimedia Commons https://en.wikipedia.org/wiki/File:Airfoil_with_flow.png)
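    For reference, the schoolbook statement from Hydrodynamica is today usually written as Bernoulli’s equation for steady, incompressible, inviscid flow along a streamline,

    [latex] p + \frac{1}{2}\rho v^2 + \rho g h = \text{constant} [/latex]

    where [latex]p[/latex] is the static pressure, [latex]\rho[/latex] the density, [latex]v[/latex] the flow speed, [latex]g[/latex] the gravitational acceleration and [latex]h[/latex] the height. As [latex]v[/latex] increases along a streamline, [latex]p[/latex] must decrease for the sum to remain constant.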

    Just 20 years after Daniel Bernoulli’s treatise on incompressible fluid flow, Leonhard Euler published his General Principles of the Movement of Fluids, which included the first example of a differential equation to model fluid flow. However, to derive this expression Euler had to make some simplifying assumptions about the fluid, particularly incompressibility, i.e. water-like rather than air-like behaviour, and zero viscosity, i.e. a fluid without any stickiness. While this approach allowed Euler to find solutions for some idealised fluids, the equation is too simplistic to be of use for most practical problems.

    A more realistic equation for fluid flow was derived by the French scientist Claude-Louis Navier and the Irish mathematician George Gabriel Stokes. By relaxing the condition of inviscid flow initially assumed by Euler, these two scientists were able to derive a more general system of partial differential equations describing the motion of a viscous fluid.

    [latex]\rho\left(\frac{\partial\boldsymbol{v}}{\partial t}+\boldsymbol{v}\cdot\nabla\boldsymbol{v}\right)=-\nabla p+\nabla\cdot\boldsymbol{T}+\boldsymbol{f}[/latex]

    The above equations are today known as the Navier-Stokes equations and are infamous in the engineering and scientific communities for being exceptionally difficult to solve. For example, to date it has not been proven that solutions always exist in a three-dimensional domain, nor, if they do exist, that they are necessarily smooth and continuous. This is one of the seven Millennium Prize Problems, with a $1m prize for the first person to provide a valid proof or counter-example.

    Fundamentally, the Navier-Stokes equations express Newton’s second law for fluid motion, combined with the assumption that the internal stress within the fluid is the sum of a diffusive (“spreading out”) viscous term and a pressure term – hence they account for viscosity. However, the Navier-Stokes equations are best understood in terms of how the fluid velocity, given by [latex]\boldsymbol{v}[/latex] in the equation above, changes over time and location within the fluid flow. Thus, [latex]\boldsymbol{v}[/latex] is an example of a vector field, as it expresses how the speed of the fluid and its direction change over a certain line (1D), area (2D) or volume (3D) and with time [latex]t[/latex].

    The other terms in the Navier-Stokes equations are the density of the fluid [latex]\rho[/latex], the pressure [latex]p[/latex], the frictional shear stresses [latex]\boldsymbol{T}[/latex], and body forces [latex]\boldsymbol{f}[/latex] which are forces that act throughout the entire body such as inertial and gravitational forces. The dot is the vector dot product and the nabla operator [latex]\nabla[/latex] is an operator from vector calculus used to describe the partial differential in three dimensions,

    [latex]\nabla = \left(\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z}\right)[/latex]

    In simple terms, the Navier-Stokes equations balance the rate of change of the velocity field in time and space, multiplied by the mass density, on the left hand side of the equation with pressure, frictional tractions and volumetric forces on the right hand side. As the rate of change of velocity is equal to acceleration, the equations boil down to the fundamental conservation of momentum expressed by Newton’s second law.

    One of the reasons why the Navier-Stokes equations are so notoriously difficult to solve is the presence of the non-linear [latex]\boldsymbol{v}\cdot\nabla\boldsymbol{v}[/latex] term. Until the advent of scientific computing, engineers, scientists and mathematicians could only rely on very approximate solutions. In modern computational fluid dynamics (CFD) codes the equations are solved numerically, which would be prohibitively time-consuming if done by hand. However, in some complicated practical applications even this numerical approach becomes too expensive, such that engineers have to rely on statistical methods to approximate the solution.
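    A feel for this numerical approach can be had from the one-dimensional viscous Burgers’ equation, [latex]\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=\nu\frac{\partial^2 u}{\partial x^2}[/latex], which retains the non-linear convection and viscous diffusion terms of Navier-Stokes but drops pressure and the extra dimensions. The sketch below solves it with a simple explicit finite-difference scheme; the grid size, time step, viscosity and initial “hat” profile are arbitrary illustrative choices, not taken from any production CFD code:

```python
import numpy as np

# 1D viscous Burgers' equation: du/dt + u du/dx = nu d2u/dx2.
# It keeps the two terms that make Navier-Stokes hard: non-linear
# convection (u du/dx) and viscous diffusion (nu d2u/dx2).

nx, nt = 101, 500          # grid points and time steps (illustrative)
dx = 2.0 / (nx - 1)        # domain: x in [0, 2]
nu = 0.07                  # viscosity
dt = 0.001                 # time step, small enough for stability

x = np.linspace(0.0, 2.0, nx)
u = np.where((x >= 0.5) & (x <= 1.0), 2.0, 1.0)  # initial "hat" profile

for _ in range(nt):
    un = u.copy()
    # upwind (backward) difference for convection, central for diffusion;
    # boundary values u[0] and u[-1] are held fixed at 1.
    u[1:-1] = (un[1:-1]
               - un[1:-1] * dt / dx * (un[1:-1] - un[:-2])
               + nu * dt / dx**2 * (un[2:] - 2.0 * un[1:-1] + un[:-2]))

print(f"peak velocity after {nt} steps: {u.max():.3f}")
```

    Viscosity smears the initial discontinuity into a smooth front, while the non-linear term sweeps the profile to the right faster where the velocity is higher – the basic mechanism behind shock formation.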

    The complexity of the solutions should not come as a surprise to anyone given the numerous wave patterns, whirlpools, eddies, ripples and other fluid structures often observed in water. Such intricate flow patterns are critical for accurately modelling turbulent flow behaviour, which occurs in any high-velocity, low-viscosity flow field (strictly speaking, high Reynolds number flow) such as the flow around aircraft surfaces.
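    For reference, the Reynolds number mentioned above is the dimensionless ratio of inertial to viscous forces,

    [latex] Re = \frac{\rho v L}{\mu} [/latex]

    where [latex]\rho[/latex] is the fluid density, [latex]v[/latex] a characteristic flow speed, [latex]L[/latex] a characteristic length (e.g. the wing chord) and [latex]\mu[/latex] the dynamic viscosity. Turbulence typically sets in once [latex]Re[/latex] exceeds a threshold that depends on the geometry of the flow.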

    Nevertheless, the Navier-Stokes equations have helped to revolutionise modern transport and enabled many other technologies. CFD techniques that solve these equations have helped to improve flight stability and reduce drag in modern aircraft, make cars more aerodynamically efficient, and aid the study of blood flow, e.g. through the aorta. Fluid flow in the human body is especially tricky, as the artery walls are elastic. Thus, such an analysis requires the coupling of fluid dynamics and the elasticity theory of solids, known more generally as fluid-structure interaction, of which aeroelasticity is the aeronautical special case. Furthermore, CFD techniques are now widely used in the design of power stations and in weather prediction.

    In the early days of aircraft design, engineers often relied on back-of-the-envelope calculations, intuition, and trial and error. However, with the increasing size of aircraft, the focus on reliability, and economic constraints, such techniques are now only used in the preliminary design stages. These initial designs are then refined using more detailed CFD techniques, applied to the full aircraft and locally to critical components, in the detail design stage. Equally, it is infeasible to use the more detailed CFD techniques throughout the entire design process, due to the lengthy computational times required by these models.

Physical wind tunnel experiments are currently indispensable for validating the results of CFD analyses. The combined effort of CFD and wind-tunnel tests was critical in the development of supersonic aircraft such as Concorde. Sound travels via vibrations in the form of pressure waves, and the speed of these vibrations is given by the local speed of sound, which is a function of the fluid's temperature and compressibility. At supersonic speeds the surrounding air molecules cannot “get out of the way” before the aircraft arrives, and therefore air molecules bunch up in front of the aircraft. As a result, a high pressure shock wave forms in these areas, characterised by an almost instantaneous change in fluid temperature, density and pressure across the shock wave. This abrupt change in fluid properties often leads to complicated turbulent flows and can induce unstable fluid/structure interactions that adversely influence flight stability and may damage the aircraft.

The problem with performing wind-tunnel tests to validate CFD models of these phenomena is that they are expensive to run, especially when many model iterations are required. CFD techniques are comparatively cheap and rapid, but are based on idealised conditions. As a result, CFD programs that solve the Navier-Stokes equations for simple and more complex geometries have become an integral part of modern aircraft design, and with increasing computing power and improved numerical techniques they will only increase in importance over the coming years. In any case, the story of the Navier-Stokes equation is a typical example of how our quest to understand nature has provided engineers with a powerful new tool to design improved technologies and dramatically improve our quality of life.

    References

    If you’d like to know more about the Navier-Stokes equations or 16 other equations that have changed the world, I highly recommend you check out Ian Stewart’s book of the same name.

    Ian Stewart – In Pursuit of the Unknown: 17 Equations That Changed the World. Basic Books. 2013.

  • Engineering – A Manifesto

    “Engineering is not the handmaiden of physics any more than medicine is of biology”


    What is science? And how is it different from engineering? The two disciplines are closely related and the differences seem subtle at first, but science and engineering ultimately have different goals.

    A scientist attempts to gain knowledge about the underlying structure of the world using systematic observations and experimentation. Scientists are experts in dealing with doubt and uncertainty. As the great Richard Feynman pointed out: “When a scientist doesn’t know the answer to a problem, he is ignorant. When he has a hunch as to what the result is, he is uncertain. And when he is pretty darned sure of what the result is going to be, he is in some doubt” [1]. The body of science is a collection of statements of varying degrees of certainty, and in order to allow progress, scientists need to leave room for doubt. Without doubt and discussion there is no opportunity to explore the unknown or discover new insights about the structure and behaviour of the world.

    In the same manner, the role of the engineer is to explore the realm of the unknown by systematically searching for new solutions to practical problems. Engineering is less about knowing (or not knowing), and more about doing; it is about dreaming how the world could be, rather than studying how it is. Engineers rely on scientific knowledge to design, build and control hardware and software, and therefore apply scientific insights to devise creative solutions to practical problems.

    I bring up this seemingly superfluous topic because even seasoned journalists can confuse, perhaps unwillingly, the differences between the two endeavours. This article in the Guardian about the recent landing of Philae on Comet 67P refers to the great success of “scientists” on multiple occasions, but fails to give due credit to “engineers” by referring to their role only once. So, is landing a machine on an alien body hurtling through space a scientific or an engineering achievement?

    There is certainly no straightforward answer to this question. Both scientists and engineers were indispensable in the success of the Rosetta program. However, in paying credit to the fantastic achievement of engineers involved in this space endeavour, I will leave you with this brief letter by three University of Bristol professors, that so poetically captures the essence of engineering:

    Landing Philae on Comet 67P from the Rosetta probe is a fantastic achievement (One giant heartstopper, 14 November). A tremendous scientific experiment based on wonderful engineering. Engineering is the turning of a dream into a reality. So please give credit where credit is due – to the engineers. The success of the science is yet to be determined, depending on what we find out about the comet. Engineering is not the handmaiden of physics any more than medicine is of biology – all are of equal importance to our futures.

– Emeritus Professor David Blockley, Professor Stuart Burgess and Professor Paul Weaver, University of Bristol

    References

    [1] “What Do You Care What Other People Think?: Further Adventures of a Curious Character” by Richard P. Feynman. Copyright (c)1988 by Gweneth Feynman and Ralph Leighton.

  • The Airline Metro System

When I was travelling in Chile a short while ago I took a flight from the capital Santiago de Chile to the city of Calama in the Atacama desert. What was interesting about this flight was that on its way to Calama the airplane made a short stop in Copiapó. Immediately after leaving the runway the doors opened, a couple of people got off and were immediately replaced by others already waiting on the tarmac. I had never seen this metro-style system of operating an airline before and was surprised at how efficiently it was implemented. I was also struck by the admittedly ludicrous idea of operating an air-bus (no pun intended) style fixed travel route between major European cities, say London-Paris-Madrid-Rome-Vienna-Berlin-London, with people hopping on and off at their pleasure. How cool would that be?

I understand that the fixed costs of this system would be relatively high, and making any money on the tight margins that airlines operate on would be incredibly tough. However, research is currently ongoing to realise a similar system for long distance travel. One possibility is exploiting the concept of air-to-air refuelling that has been used by the military and Air Force One for many years. A collaborative European study, Research on a Cruiser-Enabled Air Transport Environment (Recreate), has been running simulations at the National Aerospace Laboratory (NLR) in Amsterdam since 2011. The aim of these simulations is to investigate the technical challenges and potential savings of refuelling airliners in midair.

Leading Boeing 707 refuelling a trailing 747 using a rearward extended boom

This may sound like a fanciful notion, but given that airlines have to cut their 2005 carbon emissions in half by 2050, it is well worth looking into such radical ideas. In fact, preliminary results of the study show that fuel burn could be reduced by 11% to 23% if airliners could be refuelled by tanker planes. As passenger safety is paramount in civil aviation, the military concepts currently in use will have to be adapted to meet the required reliability standards. In military refuelling the tanker flies ahead of the receiving aircraft and supplies fuel through a boom from above. To reduce the likelihood of collisions, the researchers prefer a forward-extending boom that refuels the airliner from below. In this manner the civil aircraft does not fly in the wake of the tanker, which could cause turbulence and affect passenger comfort. Furthermore, the responsibility and training remain with the tanker pilots, who have better visibility of the refuelling process when flying from the rear.

The researchers also intend to take the concept one step further by exchanging cargo and passengers in midair, thus getting closer to the idea of an airline metro system. This research envisions a new type of large cruising airliner that is fed by much smaller feeder planes. In this scenario, the larger cruisers fly fixed routes over large distances, while the smaller feeders exchange passengers, crew and cargo with the cruiser in midair. One major challenge with the scheme is that the cruiser aircraft will require an incredibly durable engine with low fuel consumption. Such a system does not seem to be economically feasible using current chemically fuelled jet engines. The greater amount of fuel to be stored has to be offset by a larger engine and airframe, which naturally increases the loads on components, in turn requiring thicker sections and structures. Thus, with current chemically fuelled engines you are very much caught in the downward payload spiral that is so frustrating in rocketry.

But what if the cruisers are propelled by nuclear engines? Well, the efficiency of the system improves significantly. In fact, the efficiency gains are so great that a large cruiser could fly continuously for a whole year on just a few litres of fuel. Powered by nuclear fusion, a cruiser could stay airborne for months, and passengers could hop on and off a continuously airborne global fleet of international airliners.

And it turns out that in October 2014 Lockheed Martin’s Skunk Works announced that they could have a prototype fusion reactor ready within five years and a working production engine within ten. The obvious “buts” are that a fusion process requires temperatures in the millions of degrees in order to separate ions from electrons, creating hot plasma in the process. In fusion the danger is not nuclear fallout, as is the case in fission. Fission engines require shielding to protect passengers and also carry the danger of spreading radioactive material in the event of a crash. In a fusion engine the difficulty lies in stabilising the plasma and safely containing it in the reactor to guarantee the fusion of ions. The Skunk Works is currently working on an electromagnetic confinement system to guarantee a stable reaction. Furthermore, the neutrons that are emitted in the fusion process can damage the materials of the containing structure and turn them radioactive. Thus materials that minimise this induced radioactivity are needed. Finally, the fusion reactors need to be miniaturised from the scale of family houses to something more akin to an SUV. In that event fusion reactors will also become an interesting propulsion method for spaceships and other spacecraft that have limited space for power generation.

    While this is all science fiction for now it presents an interesting option for facilitating a global metro-style airline system. And how cool would that be?

  • Human Fallibility in Aviation II: Case Study

    Vanity Fair recently featured an excellent article on Air France Flight 447 that crashed into the Atlantic in 2009. It is a long read, but if you have 30 min to spare it will be a great educational investment.

The author, William Langewiesche, does a good job at weaving multiple aspects of aeronautics, such as cockpit design, ergonomics, the physics of flight and pilot training, into a story that is ultimately about the role of human fallibility in a system governed by automation. This is a topic that I find highly fascinating and that will only become more pertinent in the future as computers take over an increasing number of tasks in the cockpit. In fact, the psychological impact on the pilots and the effect of automation on the piloting profession as a whole remain uncertain.

The article features extensive coverage of the pilots’ conversation and provides a riveting account of what transpired in the cockpit prior to the crash. In this way the article brings to light some of the human misjudgements that ultimately led to the catastrophe. On some occasions I found myself cringing in disbelief at the events that transpired, futilely hoping that the pilots would turn the situation around and save the 228 passengers onboard, while fully aware that hindsight makes all mistakes appear tauntingly clear.

The reason for the plane crash was a classic case of aerodynamic stall, brought on by the pilot climbing too quickly and exceeding the critical angle of attack, which depending on the operating conditions lies in the range of 13-16°. Even as the angle of attack reached an incredible 41°, as the aircraft rolled from side to side, the alarm system screamed “STALL”, the cockpit shook violently due to the turbulent flow separation over the wings and the aircraft lost altitude at a rate of 4,000 feet per minute (each one a tell-tale sign of aerodynamic stall), the pilots did not know what was happening with the airplane!

What brought the aircraft into this situation in the first place? The pitot-static tubes used as sensors for the flight speed had been clogged with ice crystals during a storm, which automatically took the fly-by-wire system out of autopilot, disabled the automatic stall protection and returned the controls to the pilots. At this point, had the pilots continued the modus operandi of keeping the aircraft at the same altitude with the engines at constant thrust, nothing would have happened. It is ironic that the only thing the pilots needed to do to keep the plane safely in the air was nothing. It is unclear why one of the pilots decided to climb to a higher altitude, and especially why this was done so rapidly, but this ultimately triggered the aerodynamic stall of the wings.

    William Langewiesche argues that increasing automation “de-skills” pilots, essentially rendering them incapable of flying an aircraft without support systems. I find the following section especially interesting:

    “For commercial-jet designers, there are some immutable facts of life. It is crucial that your airplanes be flown safely and as cheaply as possible within the constraints of wind and weather. Once the questions of aircraft performance and reliability have been resolved, you are left to face the most difficult thing, which is the actions of pilots. There are more than 300,000 commercial-airline pilots in the world, of every culture. They work for hundreds of airlines in the privacy of cockpits, where their behavior is difficult to monitor. Some of the pilots are superb, but most are average, and a few are simply bad. To make matters worse, with the exception of the best, all of them think they are better than they are. Airbus has made extensive studies that show this to be true.”

    So how has this been dealt with in the past?

    “First, you put the Clipper Skipper [daring WW II fighter pilots] out to pasture, because he has the unilateral power to screw things up. You replace him with a teamwork concept—call it Crew Resource Management—that encourages checks and balances and requires pilots to take turns at flying. Now it takes two to screw things up. Next you automate the component systems so they require minimal human intervention, and you integrate them into a self-monitoring robotic whole. You throw in buckets of redundancy. You add flight management computers into which flight paths can be programmed on the ground, and you link them to autopilots capable of handling the airplane from the takeoff through the rollout after landing. You design deeply considered minimalistic cockpits that encourage teamwork by their very nature, offer excellent ergonomics, and are built around displays that avoid showing extraneous information but provide alerts and status reports when the systems sense they are necessary. Finally, you add fly-by-wire control. At that point, after years of work and billions of dollars in development costs, you have arrived in the present time. As intended, the autonomy of pilots has been severely restricted, but the new airplanes deliver smoother, more accurate, and more efficient rides—and safer ones too.”

    This essentially causes a shift in the piloting profession…

    “In the privacy of the cockpit and beyond public view, pilots have been relegated to mundane roles as system managers, expected to monitor the computers and sometimes to enter data via keyboards, but to keep their hands off the controls, and to intervene only in the rare event of a failure. As a result, the routine performance of inadequate pilots has been elevated to that of average pilots, and average pilots don’t count for much[…]Once you put pilots on automation, their manual abilities degrade and their flight-path awareness is dulled: flying becomes a monitoring task, an abstraction on a screen, a mind-numbing wait for the next hotel.[…] For all three [pilots on Air France Flight 447], most of their experience had consisted of sitting in a cockpit seat and watching the machine work.”

We all know that automation is indispensable going forward. It is too valuable a system and has made aviation the safe mode of transport it is today. However, the issues raised above will need to be addressed in the near future. One possible solution may be to require pilots to turn off the autopilot for a certain number of flights; another may be to improve the machine-human interaction in the cockpit. In either case, I think it is important to point out that catastrophes such as Air France Flight 447 are outliers, black swans, six-sigma events that are unlikely to repeat in exactly the same form. In fact, the roots of the next catastrophe may lie somewhere completely different and are thus impossible to predict.

    References

    [1] William Langewiesche, “The Human Factor”, Vanity Fair, October 2014. http://www.vanityfair.com/business/2014/10/air-france-flight-447-crash

  • Loads Acting on Aircraft

The flight envelope of an aeroplane can be divided into two regimes. The first is rectilinear flight, i.e. flight in a straight line in which the aircraft does not accelerate normal to the direction of flight. The second is curvilinear flight, which, as the name suggests, involves flight along a curved path with acceleration normal to the tangential flight path. Curvilinear flight is often known as manoeuvring and is of greater importance for structural design since the aerodynamic and inertial loads are much higher than in rectilinear flight.

As the aircraft moves relative to the surrounding fluid, a pressure field is set up over the entire aircraft, not only over the wings, that acts to keep the aircraft aloft. This aerodynamic pressure always acts normal to the outer contour of the skin, but the resultant force can be resolved into components acting tangential and normal to the direction of flight. The sum of the forces normal to the direction of flight gives rise to the lift force L, which offsets the weight of the aircraft W. The tangential components give the resultant drag force D, which in powered flight must be overcome by the propulsive force F. The resultant force F includes the thrust generated by the engines, the induced drag of the propulsive system and the inclination of the line of thrust to the direction of flight. In basic mechanics the aircraft is simplified into a point coincident with the centre of gravity (CG) of the aircraft, with all forces assumed to act through the CG. If the net resultant of a force is offset from the CG then a resultant moment will also act on the aircraft. For example, the lift generated by the wings is generally offset from the centre of gravity and may thus produce a net pitching moment that has to be balanced by the control surfaces. Figure 1 below shows a simplified free body diagram of an aircraft in level flight, climb and descent.

Fig. 1. Free body diagram of aircraft in flight (1)

    Note that the lift is only equal and opposite to the weight in steady and level flight, thus:

    [latex] F = D [/latex] and [latex] L = W [/latex]

In steady descent and steady climb the lift component is less than the weight, since only a component of the weight acts normal to the direction of flight and because, by definition, lift is always normal to both drag and thrust. Also, in a climb the thrust must be greater than the drag to overcome the component of weight acting against the direction of flight, and vice versa in descent. Thus in a climb:

    [latex] L = W \cos \gamma_c [/latex] and [latex] F = D + W \sin \gamma_c [/latex]

    and in descent

[latex] L = W \cos \gamma_d [/latex] and [latex] F = D - W \sin \gamma_d [/latex]

    This situation is schematically represented in Figure 1 by the relative sizes of the different arrows. In general we can imagine the weight being balanced by the lift force L and the difference between the thrust F and the drag D.  A bit of manipulation of the two equations for climb or descent above gives the same expression,

    [latex] L^2 + (F-D)^2 = W^2 \cos^2 \gamma_c + W^2 \sin^2 \gamma_c [/latex]

    such that,

    [latex] W = \sqrt{L^2 + (F-D)^2} [/latex]

    The latter expression is clearly obtained if Pythagoras’ rule is applied to the vector triangles that include (F-D) and L in Figure 1.
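This force triangle is easy to verify numerically. A minimal sketch with an assumed weight and climb angle (both are illustrative, not taken from the text):

```python
import math

# Steady climb: L = W cos(gamma), F - D = W sin(gamma).
# Weight and climb angle are illustrative assumptions.
W = 500e3                     # aircraft weight [N]
gamma_c = math.radians(8.0)   # climb angle

L = W * math.cos(gamma_c)          # lift, normal to the flight path
F_minus_D = W * math.sin(gamma_c)  # net forward force along the flight path

# Pythagoras on the two orthogonal components recovers the weight
W_check = math.sqrt(L**2 + F_minus_D**2)
print(abs(W_check - W) < 1e-6)  # True
```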

Figure 1 also shows velocity diagrams depicting the relationship between the true airspeed V, tangential to the direction of flight, and the rates of climb and descent [latex]v_c[/latex] and [latex] v_d[/latex] respectively. We can combine these velocity triangles with the force triangles to obtain simple equations for the rates of climb and descent,

[latex] \sin \gamma_c = \frac{F-D}{W} [/latex] and [latex] \sin \gamma_c = \frac{v_c}{V} [/latex] (similarly [latex] \sin \gamma_d = \frac{v_d}{V} [/latex] in descent)

such that [latex] v_c = \frac{F-D}{W} V [/latex], and correspondingly for [latex] v_d [/latex].
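Plugging illustrative numbers (assumed, not from the text) into the rate-of-climb expression:

```python
# Rate of climb v_c = (F - D) / W * V. All values are illustrative.
F = 120e3   # net propulsive force [N]
D = 80e3    # drag [N]
W = 500e3   # weight [N]
V = 150.0   # true airspeed [m/s]

v_c = (F - D) / W * V
print(f"rate of climb = {v_c:.1f} m/s")  # 12.0 m/s
```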

    This expression can also be used to gain some insight into the driving factors behind gliding flight. In this case the net propulsive force F is zero such that the expression becomes,

[latex] v_d = -\frac{D}{W} V [/latex], which may be approximated as [latex] v_d = -\frac{D}{L} V [/latex] since, for the very shallow descent angles typical of gliding, the lift is approximately equal to the weight. Therefore the gliding efficiency of a sailplane depends on maximising the lift-to-drag ratio L/D. If the ascending thermals rise at a rate equal to or greater than this rate of descent, then the glider can continuously maintain or even gain altitude.
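The glide relation can be sketched with assumed sailplane figures (the glide ratio and airspeed below are illustrative assumptions):

```python
# Sink rate magnitude |v_d| = (D/L) * V for a shallow glide.
# Glide ratio and airspeed are illustrative assumptions.
L_over_D = 40.0   # lift-to-drag ratio of a high-performance sailplane
V = 25.0          # true airspeed [m/s]

sink_rate = V / L_over_D
print(f"sink rate = {sink_rate:.3f} m/s")  # 0.625 m/s
```

So a thermal rising faster than roughly 0.6 m/s would be enough to keep such a glider aloft.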

An aircraft may of course increase its speed along the direction of rectilinear flight, in which case the thrust force F must be greater than the vector sum of the drag and the component of the weight. A more interesting scenario is accelerated flight where the acceleration occurs as a result of a change in direction rather than a change in speed. By definition, in vector mechanics a change in direction is a change in velocity and therefore an acceleration, even if the magnitude of the speed does not change. A change in the flight path is achieved by moving away from the equilibrium condition depicted in Figure 1, either by changing the magnitude of the overall lift component or through differences in lift between the two wings. This change can be obtained either by a change in true airspeed or by changing the angle of attack of the wings relative to the airflow. Consider the simple banked turn in Figure 2 below.

Fig. 2. Free body diagram of an aircraft in a banked turn (1)

    As the aircraft banks the lift force normal to the wings is turned through an angle [latex] \theta [/latex] from the vertical weight vector. Since the centripetal acceleration acts horizontally and the weight acts vertically we can use simple trigonometric relations to find the radius of turn:

    [latex] \tan \theta = \frac{F_{centripetal}}{W} = \frac{m V^2 / R }{m g} [/latex] such that [latex] R = \frac{V^2}{g \tan \theta} [/latex]. It is also obvious that the more steeply banked the turn the more lift will be required from the wings since,

    [latex] L = \frac{W}{\cos \theta}[/latex]

such that an increase in engine power is needed to maintain constant speed under this flight condition. This is one of the reasons why fighter jets that require manoeuvres with very tight radii have such short and stubby wings. Small radii of turn R, and thus high banking angles [latex] \theta [/latex], require increases in lift and therefore increase the bending moments acting on the wings.
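The two banked-turn relations can be combined in a short numerical sketch (the airspeed and bank angle are illustrative assumptions):

```python
import math

# Level banked turn: R = V^2 / (g tan(theta)), n = L/W = 1/cos(theta).
# Airspeed and bank angle are illustrative assumptions.
g = 9.81                     # gravitational acceleration [m/s^2]
V = 100.0                    # true airspeed [m/s]
theta = math.radians(60.0)   # bank angle

R = V**2 / (g * math.tan(theta))  # turn radius
n = 1.0 / math.cos(theta)         # load factor

print(f"R = {R:.0f} m, n = {n:.1f}")  # R = 589 m, n = 2.0
```

At 60° of bank the wings must produce twice the aircraft's weight in lift, which is exactly why tight turns drive the wing bending loads.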

In reality the airplane is subjected to a large variety of different combinations of accelerations (rolls, pull-ups, push-overs, spins, stalls, gusts etc.) at different velocities and altitudes. In classical mechanics free fall is expressed as an acceleration of -1g and level flight as 0g. The aeronautical engineer departs from this convention in order to make the comparison between lift and weight simpler: free fall is denoted by 0g and level flight by 1g. The ratio between lift and aircraft weight is called the load factor n, where [latex] n = \frac{L}{W} [/latex], i.e. n = 0 for free fall, n = 1 for level flight, n > 1 to pull out of a dive and n < 1 to pull out of a climb. The overall load spectrum of an aircraft is captured graphically by so-called velocity-load factor (V-n) diagrams. The outline of these diagrams is given by the possible combinations of load factor and velocity that an aircraft will be expected to cope with. For example, Figure 3a shows the basic V-n diagram for symmetric flight (asymmetric envelopes exist for rolls etc. but are not covered here).

Fig. 3. The a) basic manoeuvre and b) gust flight envelopes (1)

    The envelope is constructed from the positive and negative stall lines which indicate, respectively, the maximum and minimum load that can be achieved because of the inability of the aircraft to produce any more lift. Thus,

    [latex] L = n W = \frac{1}{2}C_{L_{max}} \rho V^2 S [/latex]

where [latex] \rho [/latex] is the density of the surrounding air and [latex] S [/latex] is the wing surface area. The limit load factor [latex] n_l [/latex], also known as the maximum expected service load factor, is defined by

[latex] n_l = 2.1 + \frac{24 000}{W + 10 000} [/latex] or 2.5, whichever is greater, with W the maximum take-off weight in lb.

[latex] V_A [/latex], [latex] V_C [/latex] and [latex] V_D [/latex] are defined as the maximum manoeuvre speed (the speed above which it is unwise to make full application of any single flight control), the design cruise speed and the maximum dive speed, respectively. The intersection between the horizontal line [latex] n = 1 [/latex] and the left curve of the envelope is also of special significance since it represents the stall speed in level flight. In general the limit load must be tolerable without detrimental permanent deformation. The aircraft must also support an ultimate load (= limit load x safety factor) for at least 3 seconds. The safety factor is generally taken to be 1.5.
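To make these envelope quantities concrete, here is a sketch for a hypothetical airliner; all the aircraft data (weight, wing area, maximum lift coefficient) are assumptions for illustration only:

```python
import math

# Limit load factor n_l = 2.1 + 24000/(W + 10000) (W in lb), floored at 2.5,
# and the 1-g stall speed from L = W = 0.5 * C_Lmax * rho * V^2 * S.
# All aircraft data are illustrative assumptions.
W_lb = 150000.0                                   # max take-off weight [lb]
n_l = max(2.1 + 24000.0 / (W_lb + 10000.0), 2.5)  # limit load factor

W = 667e3      # weight [N] (~150,000 lb)
rho = 1.225    # sea-level air density [kg/m^3]
S = 125.0      # wing area [m^2]
C_L_max = 1.6  # maximum lift coefficient

V_stall = math.sqrt(2.0 * W / (C_L_max * rho * S))
print(f"n_l = {n_l}, V_stall = {V_stall:.1f} m/s")
```

For this assumed weight the formula gives less than 2.5, so the 2.5 floor applies, and the 1-g stall speed marks the left-hand corner of the envelope.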

Finally, Figure 3b shows a typical gust envelope. A gust alters the angle of attack of the lifting surfaces by an amount equal to [latex] \tan^{-1} (w/V) [/latex], where w is the vertical gust velocity. Since the lift scales with the angle of attack up to the point of aerodynamic stall, the aerodynamic loads applied to the structure are altered by the gust winds. The gust envelope is constructed with the same stall lines as the basic manoeuvre envelope, and different gust lines are drawn radiating from n = 1 at V = 0. Note that the design gust intensities reduce as the velocity increases, with the intention that the aircraft is flown accordingly. In the gust envelope [latex] V_A [/latex] is replaced with [latex] V_B [/latex], representing the design speed at maximum gust intensity.
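The gust-induced change in angle of attack is easy to evaluate; the gust velocity and airspeed below are illustrative assumptions:

```python
import math

# Gust-induced change in angle of attack: delta_alpha = atan(w / V).
# Gust velocity and airspeed are illustrative assumptions.
w = 15.0    # vertical gust velocity [m/s]
V = 200.0   # true airspeed [m/s]

delta_alpha = math.degrees(math.atan(w / V))
print(f"delta alpha = {delta_alpha:.1f} deg")  # ~4.3 deg
```

A few degrees of extra incidence is a substantial fraction of the margin to the critical angle of attack, which is why gust loads size so much of the structure.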

    References

    (1) Stinton, D. The Anatomy of the Airplane. 2nd Edition. Blackwell Science Ltd. (1998).