Tag: Aerospace

  • The Dangers of Outsourcing

    “Outsourcing” is a loaded term. In today’s globalised world it has come to mean many things – from using technology to farm out rote work over the internet, to sharing capacity with external partners that are more specialised at completing a certain task. However, inherent in the idea of outsourcing is the promise of reduced costs, either through reductions in labour costs, or via savings in overheads and tied-up capital.

    I recently stumbled across a 2001 paper [1] by Dr Hart-Smith of the Boeing Company, discussing some of the dangers and fallacies in our thinking regarding the potential advantages of outsourcing. The points raised by Hart-Smith are particularly noteworthy as they deal with the fundamental goals of running a business, rather than arguing by analogy or placing blind faith in proxy measurements. What follows is my take on the issue of outsourcing as it pertains to the aerospace industry only, loosely based on the insights provided by Dr Hart-Smith, and on my own understanding of the topic from disparate sources that I believe are pertinent to the discussion.

    That being said, the circumstances under which outsourcing makes economic sense depend on a broad spectrum of variables and are therefore highly complex. If you feel that my thinking is misconstrued in any way, please feel free to get in touch. With that, let’s delve a bit deeper into the good, the bad and the ugly of the outsourcing world.

    Any discussion on outsourcing can, in my opinion, be boiled down to two fundamental drivers:

    1. The primary goal of running a business: making money. Setting non-profits aside, a business exists to make a profit for its shareholders. If a business doesn’t make any money today, and isn’t expected to make a profit in the future, i.e. is not valuable on a net present value basis, then it is a lousy business. Any other metric used to measure the performance of a business, such as efficiency ratios like return on capital employed, is a helpful proxy but not the ultimate goal.
    2. Outsourcing is based on Ricardo’s idea of comparative advantage: if two parties each specialise in the production of a different good and then trade, both parties are better off than if each produced both goods for autarkic use only, even if one party is more efficient than the other at producing both goods.

    Using these two points as our guidelines it becomes clear very quickly under what conditions a company should decide to outsource a certain part of its business:

    • Another company is more specialised in this line of business and can therefore create a higher-quality product. This can be achieved via either:
      • Better manufacturing facilities, i.e. more precisely dimensioned components that save money in the final assembly process
      • Superior technical expertise. A good example are the jet engines on an aircraft. Neither Boeing nor Airbus design or manufacture their own engines as the complexity of this particular product means that other companies have specialised to make a great product in this arena.
    • The rare occasion that outsourcing a particular component of an aircraft results in a net overall profit for the entire design and manufacturing project. However, the decision to outsource should never be based on the notion of reduced costs for a single component, as there is no one-to-one causation between reducing the cost of a single component and increasing the profits of the whole project.

    Note that in either case the focus is on receiving extra value for something the company pays for, rather than on reducing costs. In fact, as I will explain below, outsourcing often leads to increases in cost rather than cost reductions. Under these circumstances, it only makes sense to outsource if this additional cost is traded for extra value that cannot be created in-house, i.e. manufacturing value or technical value.

    Reducing Costs

    Reducing costs is another buzzword often used to argue in favour of outsourcing. Considering only the first-order effects, it makes intuitive sense that offloading a certain segment of a business to a third party will reduce costs via lower labour costs, overheads, depreciation and capital outlays. In fact, this is one of the allures of the globalised world and the internet; the means of outsourcing work to lower-wage countries are cheaper than ever before in history.

    However, the second-order effects of outsourcing are rarely considered. The first fundamental rule of ecology is that in a complex system you can never do only one thing. As all parts of a complex system are intricately linked, perturbing the system in one area will have inevitable knock-on effects in another. Additionally, if the system responds non-linearly to external stimuli, these knock-on effects are non-intuitive and almost impossible to predict a priori. Outsourcing an entire segment of a project should probably be classed as a major perturbation, and as all components of a complex engineering product, such as an aircraft, are inherently linked, a decision in one area will certainly affect other areas of the project as well. Hence, consider the following second-order effects that should be accounted for when outsourcing a certain line of business:

    • Quality assurance is harder out-of-house, and hence reworking components that are not to spec may cost more in the long run.
    • Additional labour may be required in-house in order to coordinate the outsourced work, interact with the third party and interface the outsourced component with the in-house assembly team.
    • Concurrent engineering and the ability to adapt designs become much harder. In order to reduce their costs, subcontractors often operate on fixed contracts, i.e. the design specification for a component is frozen or the part to be manufactured cannot be changed. Hence, the flexibility to adapt the design of a part further down the line is constricted, and this constraint may create a bottleneck for other interfacing components.
    • Costs associated with subassemblies that cannot be fitted together balloon quickly, and the ensuing rework and detective work to find the source of the imprecision delays the project.
    • There is a need for additional transportation due to off-site production and increased manufacturing time.
    • It is harder to coordinate the manufacturing schedules of multiple external subcontractors who might all be employing different planning systems, and more inventory is usually created.

    Therefore there is an inherent clash between minimising costs locally, i.e. the costs of one component in isolation, and keeping costs down globally, i.e. for the entire project. In the domain of complex systems, local optimisation can lead to fragility of the system in two ways. First, small perturbations from local optima typically have greater effects on the overall performance of the system than perturbations from locally sub-optimal states. Second, locally optimising one factor of the system may force other factors far from their optima, and hence reduce the overall performance of the system. A general heuristic is that the best solution is a compromise in which individual components operate at sub-optimal levels, i.e. with excess capacity, such that the overall system is robust enough to adapt to unforeseen perturbations in its operating state.

    Furthermore, the decision to outsource the design or the manufacture of a specific component needs to be factored into the overall design of the product as early as possible. That way, all interfacing assemblies and sub-assemblies are designed with this particular reality in mind, rather than having to adapt to the situation a posteriori. This is because early design decisions have the highest impact on the final cost of a product. As a general rule of thumb, 80% of the final costs are locked in by the first 20% of the design decisions, such that late design changes are disproportionately more expensive than earlier ones. Having to fix misaligned sub-assemblies at final assembly costs orders of magnitude more than additional planning up front.

    Finally, the theory of constraints teaches us that the performance of the overall project can never exceed that of its least proficient component. Hence, the overall quality of the final assembly is driven by the quality of its worst suppliers. This means that in order to minimise any problems, the outsourcing company needs to provide extra quality and technical support for the subcontractors, extra employees for supply chain management, and additional in-house personnel to deal with the extra detail design work and project management.

    With all this extra work, the reality is that outsourcing should be considered an extra cost rather than a cost saving, albeit, if done correctly, in exchange for higher-quality parts. As Dr Hart-Smith warns, “The dollar value of out-sourced work is a very poor surrogate for internal cost savings.”

    Outsourcing Profits

    Hypothetically, in the extreme case where every bit of design and manufacturing work is outsourced, the only remaining role for the original equipment manufacturer (OEM) of the aircraft is to serve as a systems integrator. However, in this scenario, all profits are outsourced as well. A simple example illustrates this reality. The engines and avionics comprise about 50% of the total cost of construction of an aircraft, and the remaining 50% is at the OEM’s discretion. Would you rather earn a 25% profit margin on 5% of the total work, or a 5% profit margin on 50% of the total work? In the former case the OEM will look much more profitable on paper (higher margin), but the total amount of cash earned in the second scenario is higher. Hence, in a world where 50% of the work naturally flows to subcontractors supplying the engines, avionics and control systems, there isn’t much left of the aircraft to outsource if enough cash is to be made to keep the company in business. Without cash there is no money to pay engineers to design new aircraft and no buffer on hand to weather a downturn. If there is anything that the 20th century has taught us, it is that in the world of high-tech, any company that does not innovate and relies purely on derivative products is doomed to be disrupted by a new player.
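    To make the arithmetic explicit (using the illustrative figures above), multiply each margin by its share of the work to obtain the cash earned as a fraction of the total programme value,

    [latex] 25\% \times 5\% = 1.25\% \quad \textrm{versus} \quad 5\% \times 50\% = 2.5\%, [/latex]

    i.e. the low-margin, high-workshare scenario earns twice the cash.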

    Second, subcontractors are under exactly the same pressure as the OEM to maximise their profits. In fact, subcontractors have a greater incentive for fatter margins and higher returns on investment, as their smaller size increases the interest rates they pay on loaned capital. This means that suppliers are not necessarily incentivised to manufacture tooling that can be reused for future products, as such tooling requires more design time and cannot be billed against future products. In-house production is much more likely to lead to this type of engineering foresight. Consider the production of a part that is estimated to cost the same to produce in-house as by a subcontractor, and to the same quality standards. The higher profit margins of the subcontractor naturally result in a higher overall price for the component than if it were manufactured in-house. However, standard accounting procedures would record this as a cost reduction, since all the first-order effects, such as a lower labour rate at the subcontractor, fewer employees and less capital tied up in hard assets at the OEM, create the illusion that outside work is cheaper than in-house work.

    Skin in the Game

    One of the most heavily outsourced planes in aerospace history was the Douglas Aircraft Company DC-10, and it was the suppliers who made all the profits on this plane. It is instructive that most subcontractors were not willing to be classified as risk-sharing partners. In fact, if the contracts have been negotiated properly, most subcontractors carry very little downside risk. For financial reasons, the systems integrator can rarely allow a subcontractor to fail, and therefore provides free technical support to the subcontractor in case of technical problems. In extreme cases, the OEM is even likely to buy the subcontractor outright.

    This state of little downside risk is what N.N. Taleb calls the absence of “skin in the game” [2]. Subcontractors typically do not behave as employees do. Employees or “risk-sharing” partners have a reputation to protect and fear the economic repercussions of losing their paychecks. On the one hand, employees are more expensive than contractors and limit workforce flexibility. On the other hand, employees guarantee a certain dependability and reliability of solid work, i.e. downside protection against shoddy work. In Taleb’s words,

    So employees exist because they have significant skin in the game – and the risk is shared with them, enough risk for it to be a deterrent and a penalty for acts of undependability, such as failing to show up on time. You are buying dependability.

    Subcontractors, on the other hand, typically have more freedom than employees. They fear the law more than being fired. Financial repercussions can be built into contracts, and bad performance may lead to a loss of reputation, but an employee, by being part of the organisation and giving up some of his freedom, will always have more at risk, and therefore behave in more dependable ways. There are examples, like Toyota’s ecosystem of subcontractors, where mutual trust and “skin in the game” are built into the network via well thought-out profit sharing, risk sharing and financial penalties, but these relationships are not ad hoc and are based on long-term commitments.

    With a whole network of subcontractors, the performance of an operation is limited by its worst-performing segment. In this environment, OEMs are often forced to assist poorly performing suppliers and therefore to accept additional costs. Again from N.N. Taleb [2],

    If you miss on a step in a process, often the entire business shuts down – which explains why today, in a supposedly more efficient world with lower inventories and more subcontractors, things appear to run smoothly and efficiently, but errors are costlier and delays are considerably longer than in the past. One single delay in the chain can stop the entire process.

    The crux of the problem is that the systems integrator, who actually sells the final product, i.e. gets paid last and carries the most tail risk, can only raise the price to levels that the market will sustain. Subcontractors, on the other hand, can push for higher margins and lock in a profit before the final plane is sold, thereby limiting their exposure to cost overruns.

    ROE

    The return on net assets or return on equity (ROE) metric is a powerful proxy for measuring how efficiently a company uses its equity or net assets (assets – liabilities; where assets are everything the company owns and liabilities include everything the company owes) to create profit,

    [latex] \mathrm{ROE} = \frac{\textrm{Earnings}}{\textrm{Equity}}. [/latex]

    The difference between high-ROE and low-ROE businesses is illustrated here using a mining company and a software company as (oversimplified) examples. The mining company needs a lot of physical hard assets to dig metals out of the ground, and hence ties up a considerable amount of capital in its operations. A software company, on the other hand, is asset-light, as the cost of computing hardware has fallen exponentially in line with Moore’s Law. Thus, if both companies make the same amount of profit, the software company will have achieved this more efficiently than the mining company, i.e. it required less initial capital to create the same amount of earnings. The ROE is a useful metric for investors, as it provides information regarding the expected rate of return on their investment. Indeed, in the long run, the rate of return on an investment in a company will converge to the ROE.

    In order to secure funding from investors and achieve favourable borrowing rates from lenders, a company is therefore incentivised to beef up its ROE. This can be done either by reducing the denominator of the ratio or by increasing the numerator. Reducing equity means either running a more asset-light business or increasing liabilities in the form of debt. This is why debt is also a form of leverage, as it allows a company to earn money on outside capital. Increasing the numerator is simple on paper but harder in reality: increasing earnings without adding capacity, e.g. by cost reductions or price increases.

    Therefore ROE is a helpful performance metric for management and investors, but it is not the ultimate goal. The goal of a for-profit company is to make money, i.e. maximise its earnings power. Would you rather own a company that earns 20% on $100 of equity, or one that earns 5% on $1000 of tied-up capital? Yes, the first company is more efficient at turning a profit, but that profit is considerably smaller than for the second company. Of course, if the first company has the chance to grow to the size of the second in a few years’ time, and maintains or even expands its ROE, then this is a completely different scenario, and it would be a good investment to forego some earnings now for higher cashflow in the future. However, by and large, this is not the situation for large aircraft manufacturers such as Boeing and Airbus, and is restricted to fast-growing companies in the startup world.
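    To put numbers on this example, the absolute earnings in the two cases are

    [latex] 0.20 \times \$100 = \$20 \quad \textrm{versus} \quad 0.05 \times \$1000 = \$50, [/latex]

    so the “less efficient” company generates two and a half times the cash.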

    Second, it is foolish to assume that the numerator and denominator are completely decoupled. In fact, in a manufacturing-intense industry such as aerospace, the two terms are closely linked and their behaviour is complex, i.e. there are too many cause-and-effect relationships for us to truly understand how a reduction in assets will affect earnings. Blindly reducing assets, without taking into account the effect on the rate and cost of production, will always look like a positive step on paper, as it always increases ROE. In this manner, ROE can be misused as a false excuse for excessive outsourcing. Given the complex relationship in the aerospace industry between earnings and net assets, the real value of the ROE ratio is to provide a ballpark figure of how much extra money the company can earn in its present state with a source of incremental capital. Thus, if a company with multiple billions in revenue currently has an ROE of 20%, it can expect to earn a roughly 20% return on an incremental amount of capital employed in the business, where the exact incremental amount is of course open to interpretation.

    In summary, there is no guarantee that a reduction in assets will directly result in an increase in profits, and the ROE metric is easily misused to justify capital reductions and outsourcing when, in fact, it should be used as a ballpark figure to judge how much additional money can currently be made with more capital spending. Thus, ROE should only be used as a performance metric, never as the overall goal of the company.

    A cautionary word on efficiency

    In a similar manner to ROE, the headcount of a company is an indicator of efficiency. If the same amount of work can be done by fewer people, then the company is naturally operating more efficiently and hence should be more profitable. This is true to an extent, but not in the limit. Most engineers will agree that perfect efficiency is unattainable as a result of dissipative mechanisms (e.g. heat, friction, etc.); it could only be achieved when no work is done at all. By analogy, it is meaningless to chase ever-improving levels of efficiency if this comes at the cost of reduced sales. Therefore, in some instances it may be wise to employ extra labour capacity in non-core activities in order to maintain a highly skilled workforce that is able to react quickly to opportunities in the marketplace, even if this comes at the cost of reduced efficiency.

    So when is outsourcing a good idea?

    Outsourcing happens all over the world today, so there is obviously a lot of merit to the idea. However, as I have described above, decisions to outsource should not be made blindly in terms of shedding assets or reducing costs, and need to be factored into the design process as early as possible. Outsourcing is a valuable tool in two circumstances:

    1. Access to better IP = Better engineering design
    2. Access to better facilities = More precise manufacturing

    First, certain components on modern aircraft have become so complex in their own right that it is not economical to design and manufacture these parts in-house. As a result, the whole operation is outsourced to a supplier that specialises in this particular product segment and can deliver higher-quality products than the prime manufacturer. The best example of this is the jet engine, which today is built by companies like Rolls-Royce, General Electric and Pratt & Whitney, rather than by Airbus and Boeing themselves.

    Second, contrary to popular belief, the major benefit of automation in manufacturing is not the elimination of jobs, but an increase in precision. Precision manufacturing prevents the incredibly costly duplication of work on out-of-tolerance parts further downstream in a manufacturing operation. Toyota, for example, understood very early on that in a low-cost operation, getting things right the first time around is key, and therefore anyone on the manufacturing floor has the authority to stop production and sort out problems as they arise. Access to automated precision facilities is therefore crucial for aircraft manufacturers. However, for certain parts, a prime manufacturer may not be able to justify the high capital outlay for these machines, as there is not enough capacity in-house for them to be utilised economically. Under these circumstances, it makes sense to outsource the work to an external company that can pool work from a number of customers on its machines. Again, this only makes sense if the supplier has sufficient capacity on its machines or is able to provide improved dimensional control, e.g. by providing design-for-assembly services to make the final product easier to assemble.

    Conclusion

    After this rather long exposition of the dangers of outsourcing in the aerospace industry, here are some of the key takeaways:

    1. Outsourcing should not be employed as a tool for cost reduction. More likely than not, it will lead to extra labour and higher costs via increased transportation, rework and inventories for the prime manufacturer, and this extra price should therefore be compensated by better design engineering or better manufacturing precision than could be achieved in-house.
    2. Efficiency is not the primary goal of the operation, but can be used as a useful metric of performance. The goal of the operation is to make money.
    3. A basic level of work has to be retained in-house in order to generate sufficient cash to fund new products and maintain a highly skilled workforce. If the latter requires extra capacity, a diversification to non-core activities may be a better option than reducing headcount.
    4. Scale matters. Cost saving techniques for standardised high-volume production are typically inappropriate for low-volume industries like aerospace.
    5. Recognise the power of incentives. In-house employees typically have more “skin in the game” as risk-sharing partners, and therefore produce more dependable work than contractors.

    Sources

    [1] L.J. Hart-Smith. Out-sourced profits – the cornerstone of successful subcontracting. Boeing paper MDC 00K0096. Presented at Boeing Third Annual Technical Excellence (TATE) Symposium, St. Louis, Missouri, 2001.

    [2] N.N. Taleb. How to legally own another person. Skin in the Game. pp. 10-15. https://dl.dropboxusercontent.com/u/50282823/employee.pdf

  • What Creates Lift – How Do Wings Work?

    How airplanes fly is one of the most fundamental questions in aerospace engineering. Given its importance to flight, it is surprising how many different, and oftentimes wrong, explanations are perpetuated online and in textbooks. Just throughout my time in school and university, I have been confronted with several different explanations of how wings create lift.

    Most importantly, the equal transit time theory, explained further below, is taught in many school textbooks and therefore instils faulty intuitions about lift very early on. This is not necessarily because more advanced theories are harder to understand or require a lot of maths. In fact, the theory that requires the simplest assumptions and the least abstraction is typically considered to be the most useful.

    In science, the simplicity of a theory is a hallmark of its elegance. According to Einstein (or Louis Zukofsky or Roger Sessions or William of Ockham…I give up, who knows), “everything should be made as simple as possible, but not simpler.” Hence, the strength of a theory is related to:

    • The simplicity of its assumptions, ideally as few as possible.
    • The diversity of phenomena the theory can explain, including phenomena that other theories could not explain.

    Keeping this definition in mind, let’s investigate some popular theories about how aircraft create lift.

    The first explanation of lift that I came across as a middle-school student was the theory of “Equal Transit Times”. This theory assumes that the individual packets of air flowing across the top and bottom surfaces must reach the trailing edge of the airfoil at the same time. For this to occur, the airflow over the longer top surface must travel faster than the air flowing over the bottom surface. Bernoulli’s principle, i.e. along a streamline an increasing pressure gradient causes the flow speed to decrease and vice versa, is then invoked to deduce that the speed differential creates a pressure differential between the top and bottom surfaces, which pushes the wing up. This explanation has a number of fallacies:

    • There is no physical law that requires equal transit times, i.e. the underlying assumptions are certainly not as simple as possible.
    • It fails to explain why aircraft can fly upside down, i.e. does not explain all phenomena.

    As flow-visualisation experiments show, the air over the top surface does indeed flow faster than on the bottom surface, but the flows certainly do not reach the trailing edge at the same time. Hence, this theory of equal transit times is often referred to as the “Equal Transit Time Fallacy”.

    In order to generalise the above theory, while maintaining the mathematical relationship between speed and pressure given by Bernoulli’s principle, we can relax the initial assumption of equal transit times. If we start from a phenomenological observation of the streamlines around an airfoil, as depicted schematically below, we can see that the streamlines are bunched together towards the top surface of the leading edge, and spread apart towards the bottom surface of the leading edge. The flow between two adjacent streamlines is often called a streamtube, and the upper and lower streamtubes are highlighted in shades of blue in the figure below. A streamline is the line a fluid particle traverses as it flows through space, and thus, by definition, fluid can never cross a streamline. As two adjacent streamlines form the boundaries of a streamtube, the mass flow rate through each streamtube must be conserved, i.e. no fluid enters from the outside, and no fluid particles are created or destroyed. To conserve the mass flow rate in the upper streamtube as it becomes narrower, the fluid must flow faster. Similarly, to conserve the mass flow rate in the lower streamtube as it widens, the fluid must slow down. Hence, in accordance with the speed-pressure relationship of Bernoulli’s principle, this constriction of the streamtubes means that we have a net pressure differential that generates a lift force, as the short calculation after the figure illustrates.

    Flow lines around a NACA 0012 airfoil at 11° angle of attack, with upper and lower streamtubes identified.
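    To see how the streamtube argument translates into numbers, here is a minimal Python sketch. It applies conservation of mass along each streamtube and Bernoulli’s principle along the streamlines; since both streamtubes originate in the same undisturbed free stream, the same Bernoulli constant applies to both. The free-stream conditions and area ratios are illustrative assumptions, not values taken from the figure.

        # Minimal sketch: continuity + Bernoulli for the two streamtubes.
        # Free-stream conditions and area ratios are illustrative assumptions.
        rho = 1.225        # air density at sea level [kg/m^3]
        v_inf = 50.0       # free-stream speed [m/s]
        p_inf = 101325.0   # free-stream static pressure [Pa]

        def streamtube_state(area_ratio):
            """Speed and pressure after a streamtube changes cross-section.

            area_ratio = A_downstream / A_upstream. Mass conservation gives
            v2 = v1 / area_ratio; Bernoulli gives p2 = p1 + 0.5*rho*(v1**2 - v2**2).
            """
            v = v_inf / area_ratio
            p = p_inf + 0.5 * rho * (v_inf**2 - v**2)
            return v, p

        v_up, p_up = streamtube_state(0.8)  # upper streamtube contracts by 20%
        v_lo, p_lo = streamtube_state(1.2)  # lower streamtube widens by 20%

        print(f"upper surface: v = {v_up:.1f} m/s, p = {p_up:.0f} Pa")
        print(f"lower surface: v = {v_lo:.1f} m/s, p = {p_lo:.0f} Pa")
        print(f"pressure difference (lower - upper): {p_lo - p_up:.0f} Pa")

    The faster flow in the contracting upper streamtube ends up at a lower pressure than the slower flow in the widening lower streamtube, which is exactly the pressure differential the argument above predicts.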

    Of course, this theory does not explain why the upper streamtube contracts and the lower streamtube expands in the first place. An intuitive explanation involves the argument that the angle of attack obstructs the flow more towards the bottom of the airfoil than towards the top. However, this does not explain how asymmetric airfoils with pronounced positive camber at zero angle of attack, as shown in the figure below, create lift. In fact, such profiles were successfully used on early aircraft due to their resemblance to bird wings. Again, this theory does not explain all the physical phenomena we would like it to explain, and is therefore not the rigorous theory we are looking for.

    Asymmetric airfoil with pronounced camber [1]

    Another explanation often cited for lift is that the airfoil pushes air downwards, i.e. there is a net change of momentum in the vertical plane between the leading and trailing edges of the airfoil, and by necessity of Newton’s third law, this creates a lift force. Any object that experiences lift must certainly conform to Newton’s third law, but referring only to the difference between start and end conditions ignores the potential complexity of the flow that occurs between these two stations. Furthermore, the question remains: through what net angle is the flow deflected? One straightforward answer is the angle of incidence of the airfoil, but this ignores the upwash ahead of the wing and anything that happens behind it. Hence, the simple explanation of “pushing air downwards”, however elegant and correct, is an integral approach that sums up the fluid mechanics between the leading and trailing edges and says little about what happens in between. Indeed, as will be shown below, upwash and flow circulation play an equally important role in creating lift.

    To see this, we can imagine the flow around a 2D cylinder shown in the figure below. The flow is symmetric from left to right and top to bottom, and the cylinder experiences no lift. If we now start the cylinder spinning at a rate Ω in the clockwise direction shown, the velocity of the air increases on the upper surface (reduced pressure) and decreases on the lower surface (higher pressure). This top-to-bottom asymmetry in the flow therefore creates lift. Note that the rotation of the cylinder has moved the rear stagnation point (where the top and bottom flows converge) downwards, and has therefore broken the symmetry of the flow. Hence, in this example, lift is created by a combination of a free-stream velocity and flow circulation, i.e. air is “spun up” and not necessarily just deflected downwards (in this example the upwash ahead of the cylinder matches the downwash aft).

    Flow around a rotating cylinder that induces lift

    In the example above, lift was induced by creating an asymmetry in the curvature of the streamlines. For the stationary cylinder we had streamlines curving in one direction on the top surface, and by the same amount in the opposite direction on the bottom surface. Rotating the cylinder created an asymmetry in streamline curvature between the top and bottom surfaces (more curvature upwards than downwards). We can create a similar asymmetry in the flow with a stationary cylinder by placing a small sharp-edged flap at the rear edge, angled slightly downwards. Real viscous flow might not necessarily pass as smoothly around the little flap as shown in the diagram below, but this mental model is a neat tool to imagine how we can morphologically transition from a rotating cylinder that produces lift to an airfoil. This is shown in the series of diagrams below. The series shows that an airfoil creates a smoother variation in velocity than the cylinder, which leads to a smaller chance of boundary layer separation (a source of drag and, in the worst-case scenario, aerodynamic stall). A similar streamline profile could also be created with a symmetric airfoil that introduces asymmetry into the flow by being positioned at a positive angle of attack.

    The reason why differences in streamline curvature induce lift is addressed in a journal paper by Prof Holger Babinsky [2], which is free to download. If we consider purely steady-state flow and neglect the effects of gravity, surface tension and friction, we can derive some very basic, yet insightful, equations that explain the induced pressure difference. Quite intuitively, this argument shows that a force acting parallel to a streamline causes the flow to accelerate or decelerate along its tangential path, whereas a force acting perpendicular to the flow direction causes the streamline to curve.

    The first case is described mathematically by Bernoulli’s principle and depicted in the figure below. If we imagine a small fluid particle of finite length l situated in a field of varying pressure, then the front and back surfaces of the particle will experience different pressures. Say the pressure increases along the streamline; then the pressure force acting on the front face of the particle, which opposes its motion, is greater than the force acting on the rear surface. Hence, according to Newton’s second law, an increasing pressure field along the streamline causes the flow speed to decrease, and vice versa. However, this approach is only valid along a single streamline. Bernoulli’s principle cannot be used to relate the speeds and pressures of adjacent streamlines. Thus, we cannot use Bernoulli’s principle to compare the flows on the bottom and top surfaces of an airfoil, and can therefore say little about their relative pressures and speeds.

    Flow along a straight streamline [2]
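    Integrating this force balance along a streamline, for incompressible flow and neglecting gravity and friction, yields the familiar form of Bernoulli’s principle,

    [latex] p + \tfrac{1}{2}\rho v^2 = \textrm{constant along a streamline}, [/latex]

    which is the speed-pressure relationship invoked throughout this article.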

    However, consider the curved streamlines shown in the figure below. If we assume that the speed of a particle travelling along the curved streamline is constant, then Bernoulli’s principle states that the pressure along the streamline cannot change either. However, the velocity vector v is changing, as the direction of travel changes along the streamline. According to Newton’s second law, this change in velocity, i.e. an acceleration, must be caused by a net centripetal force acting perpendicular to the direction of the flow. As we have ignored the influence of gravity and friction, this net centripetal force must be caused by a pressure differential across the particle. Hence, a curved streamline implies a pressure differential across it, with the pressure decreasing towards the centre of curvature.

    Flow along a curved streamline [2]

    Mathematically, the pressure difference across a streamline in the direction n pointing outwards from the centre of curvature is

    [latex]\frac{\mathrm{d}p}{\mathrm{d}n} = \rho \frac{v^2}{R}[/latex]

    where R is the radius of curvature of the flow and ρ is the density of the fluid.

    One positive characteristic of this theory is that it explains other phenomena beyond our interest in airfoils. Vortices, such as tornados, consist of concentric circles of streamlines, which suggests that the pressure decreases as we move from the outside towards the core of the vortex. This observation agrees with our intuitive understanding of tornados sucking objects into the sky, and the short numerical sketch below makes it concrete.
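    As a rough numerical sketch of this vortex argument, we can integrate the pressure gradient equation above across the concentric streamlines of a free vortex, whose speed profile is v = Γ/(2πr). The circulation Γ and the radii below are made-up illustrative values, not measured tornado data.

        # Minimal sketch: integrate dp/dn = rho * v^2 / R across the concentric
        # streamlines of a free vortex (v = Gamma / (2*pi*r)). The circulation
        # Gamma and the radii are made-up illustrative values.
        import numpy as np

        rho = 1.225                          # air density [kg/m^3]
        gamma = 2000.0                       # circulation [m^2/s], illustrative
        r = np.linspace(5.0, 200.0, 2000)    # radii from near the core outwards [m]

        v = gamma / (2.0 * np.pi * r)        # free-vortex speed profile
        dp_dr = rho * v**2 / r               # radial pressure gradient, always > 0

        # Trapezoidal integration, referenced to zero gauge pressure far away.
        p = np.concatenate(([0.0], np.cumsum(0.5 * (dp_dr[1:] + dp_dr[:-1]) * np.diff(r))))
        p -= p[-1]

        print(f"gauge pressure at r = {r[0]:.0f} m: {p[0]:.0f} Pa")

    The pressure comes out strongly negative near the core, i.e. the pressure drops towards the centre of curvature, in line with the suction we associate with tornados.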

    With this understanding we can now return to the study of airfoils. Consider the simple flow path along a curved plate shown in the figure below. At point A the flow field is unperturbed by the presence of the airfoil and the local pressure is equal to the atmospheric pressure [latex]p_{atm}[/latex]. As we move down along the dashed curve, we see that the flow starts to curve around the plate. Hence, the pressure decreases as we move closer to the airfoil surface, and [latex]p_B < p_{atm}[/latex]. On the bottom half the situation is reversed. Point C is again undisturbed by the airfoil, but the flow becomes increasingly curved as we move closer to D. However, when moving from C to D, the pressure is increasing, because pressure increases moving away from the centre of curvature, which on the bottom of the airfoil lies towards point C. Thus, [latex]p_D > p_{atm}[/latex], and by the transitive property [latex]p_B < p_D[/latex], such that the airfoil experiences a net upward lift force.

    Flow around a curved airfoil [2]

    From this exposition we learn that any shape that creates asymmetric curvature in the flow field can generate lift. Even though friction has been neglected in this analysis, it is crucial in forcing the fluid to adhere to the surfaces of the airfoil via a viscous boundary layer. Therefore, the inclusion of friction does not change the theory of lift due to streamline curvature, but provides an explanation for why the streamlines are curved in the first place.

    A couple of interesting observations follow from the above discussion. Nature typically uses thin wings with high camber, whereas man-made flying machines typically have thicker airfoils due to their improved structural performance, i.e. stiffness. In the figure below, the thin, deeply cambered wing shows highly curved flow in the same direction on both the top and bottom surfaces.

    Deep camber thin wing with high lift [2]
    Shallow camber thick wing with less lift [2]

    The thicker wing with shallower camber has flow curving in two different directions on the bottom surface, and will therefore produce a smaller pressure difference between the top and bottom surfaces. Thus, for maximum lift, the thin, deeply cambered airfoils used by birds are the optimum configuration.

    In conclusion, we have investigated a number of different theories explaining how lift is created around airfoils. Each theory was judged in terms of the simplicity and validity of its underlying assumptions, and the diversity of phenomena it can describe. The theories based on Bernoulli’s principle, such as the equal transit time theory and the contraction of streamtubes theory, were either based on faulty initial assumptions, i.e. equal times, or failed to explain why streamtubes should contract or expand in the first place. The theory based on airfoils deflecting airflow downwards is correct (Newton’s third law: changes in fluid momentum over a control volume including the airfoil lead to a reactive lift force), but being an integral approach, it is not helpful in explaining what occurs between the leading and trailing edges of the airfoil (e.g. upwash is also a contributing factor to lift).

    A more intricate theory is that curved bodies induce curved streamlines, as the inherent viscosity of the fluid forces the fluid to adhere to the surface of the body via a boundary layer. The centripetal forces that arise in the curved flow lead to a drop in pressure across the streamlines towards the centre of curvature. This means that if a body leads to asymmetric curved streamlines across it, then the induced pressure differential arising from the asymmetry induces a net lift force.

    Edits and Acknowledgments

    A previous version of this article referenced a misleading and incorrect example of a highly cambered airfoil as a counterexample to the theory of airfoils deflecting airflow downwards and the theoretical explanation using control volumes. Dr Thomas Albrecht of Monash University pointed this error out to me (see the discussion in the comments) and his contribution in improving the article is gratefully acknowledged.

    Photo credit

    [1] DThanhvp. Photobucket. http://s37.photobucket.com/user/DThanhvp/media/American.jpg.html

    [2] Babinsky, H. (2003). How do wings work?. Physics Education 38(6) pp. 497-503. URL: http://iopscience.iop.org/article/10.1088/0031-9120/38/6/001/pdf;jsessionid=64686DBCB81FEB401CFFB87E18DFE6DA.c1

  • The Navier-Stokes Equation

    The name we use for our little blue planet, “Earth”, is rather misleading. Water covers about 71% of Earth’s surface, while the other 29% consists of continents and islands. In fact, this patchwork of blue and brown, earth and water, makes our planet very unlike any other planet we know to be orbiting other stars. The word “Earth” stems from a worldview rooted in a time when we were constrained to travelling the solid parts of our planet. Not until the earliest seaworthy vessels, believed to have been used to settle Australia some 45,000 years ago, did humans venture onto the water.

    Not until the 19th century did humanity make a strong effort to travel through another vast sea of fluid, the atmosphere around us. Early pioneers in China invented ornamental wooden birds and primitive gliders around 500 BC, and later developed small kites to spy on enemies from the air. In Europe, the discovery of hydrogen in the 18th century inspired intrepid pioneers to ascend into the lower altitudes of the atmosphere using rather explosive balloons, and in 1783 the brothers Joseph-Michel and Jacques-Étienne Montgolfier demonstrated a much safer alternative using hot-air balloons.

    The pace of progress accelerated dramatically around the late 19th century, culminating in the first heavier-than-air flight by Orville and Wilbur Wright in 1903. Just 7 years later, the German company DELAG invented the modern airline by offering commercial flights between Frankfurt and Düsseldorf using Zeppelins. After WWII, commercial air travel shrank the world due to the invention and proliferation of the jet engine. The de Havilland Comet pioneered commercial jet travel, but after a series of catastrophic failures it was superseded in 1958 by one of the most iconic aircraft of all, the Boeing 707. Soon military aircraft began exploring the greater heights of our atmosphere, with Yuri Gagarin making the first manned orbit of Earth in 1961, and Neil Armstrong and Buzz Aldrin walking on the moon in 1969, a mere 66 years after the first flight at Kitty Hawk by the Wright brothers.

    Air and space travel have greatly altered our view of our planet, from the solid, earthly connotations of “Earth” to the vibrant pictures of the blue and white globe we see from space. In fact, the blue of the water and the white of the air allude to the two fluids humans have used as media to travel and populate our planet to a much greater extent than travel on solid ground would ever have allowed.

    Fundamental to the technological advancement of sea- and airfaring vehicles stood a physical understanding of the media of travel, water and air. In water, the patterns of smooth and turbulent flow are readily visible, and this first sparked the interest of scientists in characterising these flows. The fluid of flight, air, is not as easily visualised and slightly more complicated to analyse. A fundamental difference between water and air is that the latter is compressible, i.e. a fixed mass of air can be squeezed into a smaller volume at the expense of an increase in internal pressure, while water, for practical purposes, cannot. Extending the early equations for water to a compressible fluid initiated the scientific discipline of aerodynamics and helped to propel the “Age of Flight” off the ground.

    One of the groundbreaking treatises was Daniel Bernoulli’s Hydrodynamica, published in 1738, which, among other things, contained the statement many of us learn in school: that fluids travel faster in areas of lower pressure than in areas of higher pressure. This statement is often used to incorrectly explain how modern fixed-wing aircraft generate lift. According to this explanation, the curved top surface of the wing forces air to flow faster, thereby lowering the pressure and inducing lift. Alas, the situation is slightly more complicated. In simple terms, lift is induced by flow curvature, as the centripetal forces in these curved flow fields create pressure gradients between the differently curved flows around the airfoil. As the flow-visualisation picture below shows, the streamlines on the top surface of the airfoil are the most curved, and this leads to a net suction pressure on the top surface. In fact, Bernoulli’s equation is not needed to explain the phenomenon of lift. For a more detailed explanation of why this is so, I highly recommend the journal article on the topic by Dr Babinsky of Cambridge University.

    Flow lines around an airfoil (Source: Wikimedia Commons https://en.wikipedia.org/wiki/File:Airfoil_with_flow.png)

    Just 20 years after Daniel Bernoulli’s treatise on incompressible fluid flow, Leonhard Euler published his General Principles of the Movement of Fluids, which included the first example of a differential equation to model fluid flow. However, to derive this expression, Euler had to make some simplifying assumptions about the fluid, particularly the condition of incompressibility, i.e. water-like rather than air-like properties, and zero viscosity, i.e. a fluid without any stickiness. While this approach allowed Euler to find solutions for some idealised fluids, the equation is too simplistic to be of use for most practical problems.

    A more realistic equation for fluid flow was derived by the French scientist Claude-Louis Navier and the Irish mathematician George Gabriel Stokes. By relaxing the condition of inviscid flow initially assumed by Euler, these two scientists were able to derive a more general system of partial differential equations that describes the motion of a viscous fluid:

    [latex]\rho\left(\frac{\partial\boldsymbol{v}}{\partial t}+\boldsymbol{v}\cdot\nabla\boldsymbol{v}\right)=-\nabla p+\nabla\cdot\boldsymbol{T}+\boldsymbol{f}[/latex]

    The above equations are today known as the Navier-Stokes equations and are infamous in the engineering and scientific communities for being notoriously difficult to solve. For example, to date it has not been shown that solutions always exist in a three-dimensional domain, nor, if they do, that the solutions are necessarily smooth and continuous. This problem is considered to be one of the seven most important open problems in mathematics, with a $1m prize for the first person to show a valid proof or counterexample.

    Fundamentally, the Navier-Stokes equations express Newton’s second law for fluid motion, combined with the assumption that the internal stress within the fluid is given by the sum of a diffusive (“spreading out”) viscous term and the pressure of the fluid; hence the equations include viscosity. However, the Navier-Stokes equations are best understood in terms of how the fluid velocity, given by [latex]\boldsymbol{v}[/latex] in the equation above, changes over time and location within the fluid flow. Thus, [latex]\boldsymbol{v}[/latex] is an example of a vector field, as it expresses how the speed and direction of the fluid change over a certain line (1D), area (2D) or volume (3D) and with time [latex]t[/latex].

    The other terms in the Navier-Stokes equations are the density of the fluid [latex]\rho[/latex], the pressure [latex]p[/latex], the frictional shear stresses [latex]\boldsymbol{T}[/latex], and the body forces [latex]\boldsymbol{f}[/latex], which are forces that act throughout the entire body, such as inertial and gravitational forces. The dot denotes the vector dot product, and the nabla operator [latex]\nabla[/latex] is an operator from vector calculus that describes the partial derivatives in three dimensions,

    [latex]\nabla = \left(\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z}\right)[/latex]

    In simple terms, the Navier-Stokes equations balance the rate of change of the velocity field in time and space, multiplied by the mass density, on the left-hand side of the equation, against the pressure, frictional tractions and volumetric forces on the right-hand side. As the rate of change of velocity is equal to acceleration, the equations boil down to the fundamental conservation of momentum expressed by Newton’s second law.

    One of the reasons why the Navier-Stokes equations are so notoriously difficult to solve is the presence of the non-linear [latex]\boldsymbol{v}\cdot\nabla\boldsymbol{v}[/latex] term. Until the advent of scientific computing, engineers, scientists and mathematicians could only rely on very approximate solutions. In modern computational fluid dynamics (CFD) codes, the equations are solved numerically, which would be prohibitively time-consuming if done by hand. However, in some complicated practical applications, even this numerical approach can become too expensive, such that engineers have to rely on statistical methods to solve the equations.
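    To get a feel for why this non-linear term causes trouble, consider the one-dimensional viscous Burgers’ equation, a standard scalar model problem that retains the same [latex]\boldsymbol{v}\cdot\nabla\boldsymbol{v}[/latex] structure. The minimal Python sketch below marches it forward in time with explicit finite differences; the grid resolution, viscosity and time step are illustrative choices, and the scheme is deliberately simple rather than production-grade.

        # Minimal sketch: 1D viscous Burgers' equation u_t + u*u_x = nu*u_xx,
        # a scalar model of the Navier-Stokes non-linearity (the u*u_x term
        # plays the role of v . grad v). Explicit finite differences on a
        # periodic domain; all numerical parameters are illustrative choices.
        import numpy as np

        nx = 200
        nu = 0.05                                   # kinematic viscosity
        x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
        dx = x[1] - x[0]
        dt = 0.2 * dx                               # small time step for stability
        u = 1.5 + np.sin(x)                         # smooth initial velocity profile
        u0 = u.copy()

        for _ in range(500):
            u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)       # central difference
            u_xx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
            u = u + dt * (-u * u_x + nu * u_xx)     # u*u_x is the non-linear term

        print(f"steepest gradient initially: {np.abs(np.gradient(u0, dx)).max():.2f}")
        print(f"steepest gradient after 500 steps: {np.abs(np.gradient(u, dx)).max():.2f}")

    Even this heavily simplified scalar cousin of the Navier-Stokes equations steepens a smooth sine wave into a sharp, shock-like front; in three dimensions, with a full velocity vector field and pressure coupling, the same mechanism is what makes the equations so resistant to analytical treatment.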

    The complexity of the solutions should not come as a surprise given the numerous wave patterns, whirlpools, eddies, ripples and other fluid structures that are often observed in water. Such intricate flow patterns are critical for accurately modelling turbulent flow behaviour, which occurs in any high-velocity, low-viscosity flow field (strictly speaking, high Reynolds number flow), such as around aircraft surfaces.
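    For a rough sense of scale (the cruise values below are ballpark assumptions, not data for a specific aircraft), the Reynolds number of an airliner wing at cruise, with density [latex]\rho \approx 0.4[/latex] kg/m³, speed [latex]v \approx 230[/latex] m/s, chord length [latex]L \approx 5[/latex] m and dynamic viscosity [latex]\mu \approx 1.5 \times 10^{-5}[/latex] Pa s, comes out at

    [latex] Re = \frac{\rho v L}{\mu} = \frac{0.4 \times 230 \times 5}{1.5 \times 10^{-5}} \approx 3 \times 10^7, [/latex]

    firmly in the turbulent regime.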

    Nevertheless, the Navier-Stokes equations have helped to revolutionise modern transport and have enabled many other technologies. CFD techniques that solve these equations have helped to improve flight stability and reduce drag in modern aircraft, make cars more aerodynamically efficient, and aid the study of blood flow, e.g. through the aorta. Fluid flow in the human body is especially tricky, as artery walls are elastic. Such an analysis therefore requires the coupling of fluid dynamics with the elasticity theory of solids, a discipline known as fluid-structure interaction (called aeroelasticity in the context of aircraft). Furthermore, CFD techniques are now widely used in the design of power stations and in weather prediction.

    In the early days of aircraft design, engineers often relied on back-of-the-envelope calculations, intuition, and trial and error. However, with the increasing size of aircraft, and with today’s focus on reliability and economic constraints, such techniques are now only used in the preliminary design stages. These initial designs are then refined using more complex CFD techniques, applied to the full aircraft and locally on critical components in the detail design stage. Equally, it is infeasible to use the more detailed CFD techniques throughout the entire design process due to the lengthy computational times required by these models.

    Physical wind tunnel experiments are currently indispensable for validating the results of CFD analyses. The combined effort of CFD and wind-tunnel tests was critical in the development of supersonic aircraft such as the Concorde. Sound travels via vibrations in the form of pressure waves, and the speed of these vibrations is given by the local speed of sound, which is a function of the fluid’s properties (for air, essentially its temperature). At supersonic speeds, the surrounding air molecules cannot “get out of the way” before the aircraft arrives, and therefore air molecules bunch up in front of the aircraft. As a result, a high-pressure shock wave forms in these areas, characterised by an almost instantaneous change in fluid temperature, density and pressure across the shock wave. This abrupt change in fluid properties often leads to complicated turbulent flows and can induce unstable fluid/structure interactions that adversely influence flight stability and may damage the aircraft.
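    As a quick sanity check on the numbers involved, and assuming air behaves as an ideal gas, the local speed of sound is [latex]a = \sqrt{\gamma R T}[/latex]; at sea level, with [latex]\gamma = 1.4[/latex], [latex]R = 287[/latex] J/(kg K) and [latex]T = 288[/latex] K, this gives

    [latex] a = \sqrt{1.4 \times 287 \times 288} \approx 340 \textrm{ m/s}, [/latex]

    which is why shock waves only become a design concern as flight speeds approach this value.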

    The problem with performing wind-tunnel tests to validate CFD models of these phenomena is that they are expensive to run, especially when many model iterations are required. CFD techniques are comparatively cheap and rapid, but are based on idealised conditions. As a result, CFD programs that solve the Navier-Stokes equations for simple and more complex geometries have become an integral part of modern aircraft design, and with increasing computing power and improved numerical techniques they will only increase in importance over the coming years. In any case, the story of the Navier-Stokes equations is a typical example of how our quest to understand nature has provided engineers with powerful new tools to design improved technologies that dramatically improve our quality of life.

    References

    If you’d like to know more about the Navier-Stokes equations or 16 other equations that have changed the world, I highly recommend you check out Ian Stewart’s book of the same name.

    Ian Stewart – In Pursuit of the Unknown: 17 Equations That Changed the World. Basic Books. 2013.

  • NASA Langley Research Center

    Earlier this year, I had the privilege of working on a research project at NASA’s Langley Research Center. Apart from interacting with world-renowned scientists and engineers, what impressed me most was the mind-blowing heritage of the site.

    NASA Langley Research Center Sign

    NASA Langley is the birthplace of large-scale, government-funded aeronautical research in the US. It was home to research on WWII planes, supersonic aircraft, the lunar landers and the Space Shuttle. Who knows how the Space Race would have panned out without the engineers at NASA Langley?

    Today, Langley is at the helm, leading aeronautical engineering into the 21st century with technologies such as advanced composites, alternative jet fuels and the journey to Mars.

    NASA Langley was established in 1917 as the first field centre of NACA (short for National Advisory Committee for Aeronautics, reorganised into NASA in 1958) and is named after the Wright brothers’ rival Samuel Pierpont Langley, whose Aerodrome flyer twice failed to cross the Potomac river in 1903.

    Amid the new composites facilities I was working in are strewn old gems such as NACA wind tunnels from the 1920s and 1930s, and the massive “Lunar Landing Research Facility”, or simply “The Gantry”, used to rehearse the Apollo lunar landings in the 1960s. During Project Mercury, NASA Langley was home to the Space Task Group, a team of engineers spearheading NASA’s first human spaceflights between 1958 and 1963. The Gantry has since been re-purposed for land-based crash tests, such as on the Orion spacecraft.

    NASA Langley Test Gantry [1]
    Another historic site is the Aircraft Landing Dynamics Facility (ALDF), a train carriage that could be accelerated at 20 g to speeds of up to 230 mph by a water jet spewing out the rear, used to test the impact on landing gears and airfield surfaces. The facility has provided NASA and its partners with an invaluable capability to test tires and landing gear and to understand the mechanics of runway friction. Prior to WWII, many engineers were convinced that the abundance of rivers and sea water would mean that aircraft would land primarily on water. As a result, research on the mechanics of landing on terra firma lagged behind, and post-WWII almost a third of all aircraft accidents could be attributed to landing issues [2]. Throughout its 52 years of operation, the ALDF has saved thousands of lives by making aircraft safer.

    As the centre’s original aim was to explore the field of aeronautics, specifically aerodynamics and propulsion, the world’s largest wind tunnel was constructed at Langley in 1934. With its whopping 30 by 60 foot cross-section, the Full-Scale Wind Tunnel was at the time one of the first that could fit an entire full-scale aircraft. The tunnel’s 4,000 bhp electric motors (4,000 bhp!) accelerated the airflow to 118 mph (190 km/h), and the tunnel was used to test basically every WWII aircraft prototype. After the war, both the F-16 and the Space Shuttle were tested in the Full-Scale Wind Tunnel. Even though it was declared a National Historic Landmark in 1985, it was demolished in 2010.

    Full Scale Wind Tunnel [3]
    As rocket research gained importance in the 1940s, the capabilities were extended from subsonic to supersonic and even hypersonic research. Even today, the importance of aerodynamics research is obvious as one drives past the 14×22 foot subsonic wind tunnel on the way to the main gate.

    The 1930s in the USA were a golden age for aeronautics. Before World War I, the US government and military did not place a high priority on aeronautics research. In fact, total research spending between 1908 and 1913 amounted to a measly $435,000, compared to a whopping $28 million spent by Germany. This put the US behind countries like Brazil, Chile, Bulgaria, Spain and Greece [4].

    NASA Langley subsonic wind tunnel on the way to the main gate [5]
    All of this changed when aeronautical research kicked off at NACA, specifically at Langley Research Center. In the 1930s aerodynamicist Eastman Jacobs developed a systematic way of designing airfoil shapes, and to this day standard wing shapes are designated with a NACA identification number.

    During the 1930s various airshows and flying competitions in Europe sparked competition to design the fastest aircraft. For example, the Schneider Trophy was an annual competition for seaplanes and was won on three occasions by Supermarine aircraft designed by Reginald J. Mitchell, who later used the insights gained from these competitions to design the iconic WWII fighter Supermarine Spitfire. However, at some point the speed records hit a wall just shy of the speed of sound and it was unclear if it was possible to break the “Sound Barrier” at all.

    Researchers were having a tough time figuring out why drag increased and lift decreased as an aircraft approached the speed of sound. It was not until 1934 that a young Langley researcher, John Stack, captured the culprit in a photograph of a high-speed wind tunnel test of an airfoil.

    As an aircraft’s airspeed approaches the speed of sound, small pockets of supersonic flow develop on the suction surface of the airfoil as the airflow accelerates over the curved profile. For thermodynamic reasons these pockets of supersonic flow terminate in normal shock waves, and the ensuing increase in pressure exacerbates the adverse pressure gradient on the suction surface. Ultimately, this leads to premature boundary layer separation, which decreases lift and increases drag (see figure below). John Stack was the first person to capture this phenomenon on film and thereby paved the way for supersonic flight in the years to come.

    Transonic shock wave [6]
    Other major accomplishments of NASA Langley Research Center include:

    • The idea of designing specific research aircraft dedicated to supersonic flight, which led to the world’s first transonic wind tunnel
    • Simulation and testing of landing in lunar gravity using the Lunar Landing Facility
    • The Viking program for Mars exploration
    • 5 Collier Trophies, U.S. aviation’s most prestigious award, including the 1946 trophy to Lewis A. Rodert for the development of a thermal wing de-icing system, and the 1947 trophy to John Stack, Lawrence D. Bell and a certain Chuck Yeager for achieving supersonic flight. Fred Weick won the trophy in 1929 for the NACA cowling, an engine cover for drag reduction and improved engine cooling
    • The grooving of aircraft runways to improve the grip of aircraft tires by reducing aquaplaning, now an international standard for runways around the world.

    Grooved airport runway [7]
    On March 3rd NASA reached a major milestone by celebrating the centennial of its predecessor, the NACA. Since 1917 Langley Research Center has played an important role in the successes of American and international air and space travel. In recent years the media has focused mostly on new commercial space companies such as Orbital Sciences and SpaceX.

    But as Elon Musk rightly points out, SpaceX’s exploits would not be possible without NASA’s achievements throughout the last 100 years and its continuing support of the private sector. In fact, NASA (then still the NACA) made one of its first steps into public-private partnerships as early as the 1940s with the development of the Bell X-1, the first manned aircraft to break the sound barrier.

    In that spirit, join me in congratulating NASA on its centennial, and here’s to more exciting aerospace developments in the next 100 years!


    References

    [1] “Nasa langley test gantry” by Unknown – NASA. Licensed under Public Domain via Wikimedia Commons – https://commons.wikimedia.org/wiki/File:Nasa_langley_test_gantry.jpg#/media/File:Nasa_langley_test_gantry.jpg

    [2] “Shooting for a better understanding of aircraft landings, ALDF hit its target” by Sam MacDonald (2015). http://www.nasa.gov/langley/shooting-for-a-better-understanding-of-aircraft-landings-aldf-hit-its-target. Published 8 May 2015. Accessed 22 May 2015.

    [3] “Full Scale Wind Tunnel (NASA Langley)” by Photocopy of photograph (original in the Langley Research Center Archives, Hampton, VA [LaRC]) (L73-5028). Licensed under Public Domain via Wikimedia Commons – https://commons.wikimedia.org/wiki/File:Full_Scale_Wind_Tunnel_(NASA_Langley).jpg#/media/File:Full_Scale_Wind_Tunnel_(NASA_Langley).jpg

    [4] “Nine notable facts about NACA” by Joe Atkinson (2015) http://www.nasa.gov/larc/nine-notable-facts-about-the-naca. Published 30 March 2015. Accessed 22 May 2015.

    [5] “14×22 Subsonic Tunnel NASA Langley” by Erik Axdahl Axda0002. Licensed under CC BY-SA 2.5 via Wikimedia Commons – https://commons.wikimedia.org/wiki/File:14x22_Subsonic_Tunnel_NASA_Langley.jpg#/media/File:14x22_Subsonic_Tunnel_NASA_Langley.jpg

    [6] “Transonic flow patterns” by U.S. Federal Aviation Administration – Airplane Flying Handbook. U.S. Government Printing Office, Washington D.C.: U.S. Federal Aviation Administration, p. 15-7. FAA-8083-3A.. Licensed under Public Domain via Wikimedia Commons – https://commons.wikimedia.org/wiki/File:Transonic_flow_patterns.svg#/media/File:Transonic_flow_patterns.svg

    [7] “Pista Congonhas03” by Valter Campanato/ABr – Agência Brasil. Licensed under CC BY 3.0 br via Wikimedia Commons – https://commons.wikimedia.org/wiki/File:Pista_Congonhas03.jpg#/media/File:Pista_Congonhas03.jpg

  • Big Data in Aerospace

    “Big data” is all abuzz in the media these days. As more and more people are connected to the internet and sensors become ubiquitous parts of everyday hardware, an unprecedented amount of information is being produced. Some analysts project 40% annual growth in data over the next decade, which compounds to roughly 30 times today’s volume (1.4^10 ≈ 29). Given this trend, what are the implications for the aerospace industry?

    Big data: according to Google, a “buzzword to describe a massive volume of both structured and unstructured data that is so large that it’s difficult to process using traditional database and software techniques.”

    Fundamentally, big data is nothing new for the aerospace industry. Sensors have been collecting data on aircraft for years, ranging from basic flight data such as speed, altitude and stability during flight, to damage and crack growth progression at service intervals. The authorities and parties involved have done an incredible job of using routine data and data gathered from failures to raise safety standards.

    What exactly does “big data” mean? Big data is characterised by a data stream that is high in volume, high in velocity and coming from multiple sources in a variety of forms. This combination of factors makes analysing and interpreting data via a live stream incredibly difficult, but such a capability is exactly what is needed in the aerospace environment. For example, structural health monitoring has received a lot of attention within research institutes because an internal sensory system that provides information about the real stresses and strains within a structure could improve prognostics about the “health” of a part and indicate when service intervals and replacements are needed. Such a system could look at the usage data of an aircraft and predict when a component needs replacing. For example, the likelihood that a part will fail could be translated into an associated repair that is the best compromise in terms of safety and cost. Furthermore, the information can be fed back to the structural engineers to improve the design of future aircraft. Ideally you want to replicate the way the nervous system uses pain to signal damage within the body and then trigger a remedy. Even though structural health monitoring systems are feasible today, analysing the data stream in real time and providing diagnostics and prognostics remains a challenge.
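
    To make this prognostics idea concrete, here is a minimal sketch of one of the simplest possible schemes: counting threshold exceedances in a simulated strain-gauge stream. All values here (the signal, the damage threshold, the inspection budget) are hypothetical, and a real system would use proper fatigue and damage models rather than a bare counter.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical strain-gauge stream (microstrain): routine load cycles
    # plus one severe load event injected around sample 2500
    stream = 600 + 150 * np.sin(np.linspace(0, 40 * np.pi, 4000))
    stream += rng.normal(0, 20, stream.size)
    stream[2500:2520] += 900

    THRESHOLD = 1200  # microstrain level treated as "damaging" (assumed)
    BUDGET = 10       # allowed damaging exceedances before inspection (assumed)

    # Running count of damaging events; flag the first sample at which
    # the accumulated count exhausts the inspection budget
    exceedances = np.cumsum(stream > THRESHOLD)
    if exceedances[-1] >= BUDGET:
        due = int(np.argmax(exceedances >= BUDGET))
        print(f"Inspection triggered at sample {due} of {stream.size}")
    else:
        print(f"{exceedances[-1]} damaging events so far; within budget")
    ```
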
    Other areas within aerospace that will greatly benefit from insights gleaned from data streams are cyber security, understanding automation and the human-machine interaction, aircraft behaviour under different weather and traffic situations, and supply chain management. Big data could also serve as the underlying structure that establishes autonomous aircraft on a wide scale. Finally, big data opens the door to a new type of adaptive design in which data from sensors are used to describe the characteristics of a specific outcome, and a design is then iterated until the desired and actual data match. This is very much an evolutionary, trial-and-error approach that will be invaluable for highly complex systems where cause and effect are not easily correlated and deterministic approaches are not possible. For example, a research team may define a general, loosely defined hypothesis about a future design or system they are trying to understand, and then use data analytics to explore the available solutions and come up with initial insights into the governing factors of a system. In this case it is imperative to fail quickly and find out what works and what does not. The algorithm can then be refined iteratively by using the expertise of an engineer to point the computer in the right direction.

    Thus, the main goal is to turn data into useful, actionable knowledge. For example, in the 1990s very limited data existed for understanding the airport taxiway system. Today we have the opposite situation: more data than we can actually use. Furthermore, not only the quantity but also the quality of data is increasing rapidly, such that computer scientists are able to design more detailed models to describe the underlying physics of complex systems. When converting data into actionable information, one challenge is how to account for as much of the data as possible before reaching a conclusion. Thus, a high-velocity, high-volume and diverse data stream may not be the most important characteristic for data analytics. Rather, it is more important that the data be relevant, complete and measurable. Therefore good insights can also be gleaned from smaller datasets if the data analytics are powerful.

    While aerospace is neither search nor social media, big data is incredibly important because the underlying streams from distributed data systems on aircraft or weather data systems can be aggregated and analysed in concert to create new insights for safety. Thus, in the aerospace industry the major value drivers will be data analytics and data science, which will allow engineers and scientists to combine datasets in new ways and gain insights from complex systems that are hard to analyse deterministically. The major challenge is how to scale up current systems into a new era where the information system is the foundation of the entire aerospace environment. In this manner data science will become a fundamental pillar of aerospace engineering, alongside the classical foundations of propulsion, structures, control and aerodynamics.
  • Variable Stiffness Composites

    In previous posts I have discussed the unique characteristics and manufacturing processes of a certain type of composite material, namely continuous fibre-reinforced plastics (FRPs). Just like many other composite materials, FRPs combine two or more materials whose combined properties are superior (in a practical engineering sense) to the properties of the constituent materials on their own. What distinguishes FRPs from other composites, such as short-fibre composites, nanocomposites or discrete particle composites, is the highly aligned, long bundles of fibres, typically glass or carbon, that are arranged in a specific direction within some resin system.

    The biggest advantage of FRPs compared to metals is not necessarily their greater specific strength and stiffness (i.e. strength/density and stiffness/density) but the increased design freedom to tailor the structural behaviour. Metals and ceramics, being isotropic materials, behave in an intuitive way since the majority of the coupling terms in the stiffness tensor vanish. If you imagine a three-dimensional cube and pull two opposing faces apart, then the other two pairs of opposing faces will move towards each other. This phenomenon of coupling between tension and compression is known as the Poisson’s effect and is aptly captured by the Poisson’s ratio.

    The Poisson’s effect in action

    In bending, a similar phenomenon occurs, known as anticlastic curvature. If you have ever tried bending a thin, beam-like structure made out of a soft material, e.g. a rubber eraser, you might have noticed that the beam wants to develop opposite curvature in the transverse direction to the main bending axis. The structure morphs into some form of saddle shape, as shown in the figure. The phenomenon occurs because the bending moment applied by the person in the picture causes tension in the top surface and compression in the bottom surface in the direction of applied bending. From the Poisson’s effect we know that this induces compression in the top surface and tension in the bottom surface in the transverse direction. This state of stress is exactly the reverse of that created by the applied bending moment, and so the panel bends in the opposite sense in the transverse direction.

    Anticlastic curvature in action (1)

    For isotropic materials the fundamental linear constitutive equations between stress and strain eliminate a lot of the possible coupling behaviour. There is no coupling between applied bending moments and twisting, no coupling between stretching/compressing and bending/twisting, and no coupling between stretching/compressing and shearing. FRPs, being orthotropic materials, i.e. having two orthogonal axes of different material properties, can display all of these effects.

    Consider a single layer of a continuous fibre-reinforced composite in the figure below. The material axes 1-2 denote the stiffer fibre direction (1) and the weaker resin direction (2). If we align the fibres with the global x-axis and apply a load in the x-direction, the layer will stretch/compress along the fibres and compress/stretch in the resin direction in the same way as described previously for isotropic materials. However, if the fibres are aligned at an angle to the x-direction, say 45°, and a load is applied in the x-direction, then the layer will not only stretch/compress in the x-direction and compress/stretch in the y-direction but also shear. This is because the layer stretches/compresses less in the fibre direction than in the resin direction. This effect can be precluded if the number of +45° layers is balanced by an equal number of -45° layers stacked on top of each other to form a laminate, e.g. a [45,-45,-45,45] laminate. However, this [45,-45,-45,45] laminate will exhibit bend-twist coupling because the 45° layers are placed further away from the mid-plane than the -45° layers. The bending stiffness of a layer scales with the layer thickness cubed and with the distance from the axis of bending (here the mid-plane) squared. Thus, the outer 45° layers contribute more to the bending stiffness of the laminate than the -45° layers, such that the coupling effects do not cancel.

    A single fibre reinforced plastic layer with material and global coordinate systems
    A single fibre reinforced plastic layer with material and global coordinate systems
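
    To make the bend-twist example above tangible, here is a minimal classical laminate theory sketch. The ply stiffnesses and thickness are assumed, typical-looking values only, and the script simply stacks the [45,-45,-45,45] laminate discussed above: the balanced layup kills the shear-extension terms A16/A26, symmetry kills the B matrix, but the bend-twist terms D16/D26 survive.

    ```python
    import numpy as np

    # Assumed carbon/epoxy lamina properties (illustrative, not measured data)
    E1, E2, G12, nu12 = 135e9, 10e9, 5e9, 0.3  # Pa
    nu21 = nu12 * E2 / E1

    # Reduced stiffness matrix Q in the material (1-2) axes
    d = 1 - nu12 * nu21
    Q = np.array([[E1 / d, nu12 * E2 / d, 0],
                  [nu12 * E2 / d, E2 / d, 0],
                  [0, 0, G12]])

    def Qbar(theta_deg):
        """Rotate Q from the material axes to the global x-y axes."""
        c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
        T = np.array([[c**2, s**2, 2*c*s],
                      [s**2, c**2, -2*c*s],
                      [-c*s, c*s, c**2 - s**2]])
        R = np.diag([1.0, 1.0, 2.0])  # Reuter matrix for engineering strain
        return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

    # [45, -45, -45, 45] laminate with assumed 0.125 mm plies
    layup, t = [45, -45, -45, 45], 0.125e-3
    z = np.linspace(-len(layup) * t / 2, len(layup) * t / 2, len(layup) + 1)

    A = sum(Qbar(th) * (z[k+1] - z[k]) for k, th in enumerate(layup))
    B = sum(Qbar(th) * (z[k+1]**2 - z[k]**2) / 2 for k, th in enumerate(layup))
    D = sum(Qbar(th) * (z[k+1]**3 - z[k]**3) / 3 for k, th in enumerate(layup))

    print("A16, A26 (shear-extension):", A[0, 2], A[1, 2])  # ~0: balanced
    print("B matrix (extension-bending):", np.round(B, 6))  # ~0: symmetric
    print("D16, D26 (bend-twist):", D[0, 2], D[1, 2])       # non-zero!
    ```
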

    Using metals, structural designers were constrained to tailoring the shape of a structure to optimise its performance, i.e. thickness, length and width, and overall profile/shape. FRPs, however, add an extra dimension for optimisation by allowing designers to tailor the properties through the thickness and thereby achieve all kinds of interesting effects. For example, forward-swept wings on aircraft have been and still are a nightmare due to aeroelastic instabilities like flutter and divergence. Basically, sweeping a wing forward is a neat idea because the airflow over swept wings flows spanwise towards the end furthest to the rear of the plane. Therefore, the tip-stall condition characteristic of backward-swept wings is moved towards the fuselage where it can be controlled more effectively. The drawback is that as the lift force bends the wingtip upwards the angle of attack increases, further increasing the lift and thereby causing more bending, and so on until the wing fails. Rather than adding more material to the wing to make it stiffer (but also heavier), an alternative solution is to use the bend-twist coupling capability of composite laminates. This was successfully achieved on the iconic Grumman X-29. As the bending loads force the wing tips to bend upward and twist the wing to higher angles of attack, the inherent bend-twist coupling of the composite laminate forces the wing to twist in the opposite direction and thereby counters the increase in the angle of attack. This is an excellent example of an efficient, autonomous and passively activated control system to prevent divergence failure.

    Grumman X-29 with forward-swept wings

    In this manner, straight-fibre composites allow structural engineers to change the stiffness and strength properties through the thickness in order to tailor the structural behaviour. The concept of variable stiffness composites adds a further dimension to this capability for tailoring. Currently this is achieved by spatially varying the pointwise fibre orientations, actively steering individual fibre tows using automated fibre placement machines. One early application considered by researchers was reducing the stress concentrations around holes by steering fibres around them.

    Automated Fibre Placement machine (2)

    This concept can be generalised by aligning fibres with the direction of local primary load paths, which can vary across different parts of the structure. Tow steering creates the possibility of designing blended structures by facilitating smooth transitions between areas with different layup requirements. One promising application of variable stiffness composites is in buckling and postbuckling optimisation of flat and curved panels. As a panel is compressed uniaxially, its capability to resist transverse bending loads reduces until a critical level is reached where the panel has lost all capability to sustain any bending loads. At this point, known as the buckling load, the fundamental state of compression becomes unstable and the panel buckles outward in a single wave or multiple waves. It has been found that variable stiffness composites can double the buckling load of flat panels by favourably redistributing the load paths in the fundamental, pre-buckling compression state. Essentially, the middle of the panel, where the buckling waves would occur, is offloaded, and the edges of the panel are forced to take more load. Thus, the aim is to redirect loads to locally supported regions and remove load from regions remote from supported boundaries. This concept has also been extended to improving aircraft fuselage sections and blade-stiffened panels.

    A variable angle tow laminate (3)

    This new technology is viewed as a promising candidate for further reducing the mass of future aerospace structures. In fact, NASA Langley Research Center recently announced that it is investing heavily in this capability. The possibility of manufacturing integrated structures with a smooth flow of material between components and minimal joints will not only revolutionise stress-based design, but also simplify manufacturing and facilitate entirely new aircraft designs that are currently infeasible. In trees, for example, there is a smooth transition of fibres from the trunk into the branches to strengthen the connecting joint. With the variable stiffness capabilities investigated by NASA we could apply this concept to simplify and even strengthen critical interfaces such as fuselage-wing connections.

    References

    (1) http://www.astm.org/HTTP/IMAGES/70104.gif

    (2) http://csmres.co.uk/cs.public.upd/article-images/Premium-nordenham.jpg

    (3) Kim et al. (2012). “Continuous Tow Shearing for Manufacturing Variable Angle Tow Composites”. Composites: Part A, 43, pp. 1347-1356


  • The Airline Metro System

    When I was travelling in Chile a short while ago I took a flight from the capital Santiago de Chile to the city of Calama in the Atacama desert. What was interesting about this flight was that on its way to Calama the airplane made a short stop in Copiapó. Immediately after leaving the runway the doors opened, a couple of people got off and were immediately replaced by others already waiting on the tarmac. I had never seen this metro-style system of operating an airline before and was surprised by how efficiently it was implemented. I was also struck by the admittedly ludicrous idea of operating an air-bus (no pun intended) style fixed travel route between major European cities, say London-Paris-Madrid-Rome-Vienna-Berlin-London, with people hopping on and off at their pleasure. How cool would that be?

    I understand that the fixed costs of this system would be relatively high, and making any money on the tight margins that airlines operate on would be incredibly tough. However, research is currently ongoing to realise a similar system for long-distance travel. One possibility is exploiting the concept of air-to-air refuelling that has been used by the military and Air Force One for many years. A collaborative European study, Research on a Cruiser-Enabled Air Transport Environment (Recreate), has been running simulations at the National Aerospace Laboratory (NLR) in Amsterdam since 2011. The aim of these simulations is to investigate the technical challenges and potential savings of refuelling airliners in midair.

    Leading Boeing 707 refuelling a trailing 747 using a rearward extended boom

    This may sound like a fanciful notion, but given that airlines have to cut their 2005-level carbon emissions in half by 2050, it is well worth looking into such radical ideas. In fact, preliminary results of the study show that fuel burn could be reduced by 11% to 23% if airliners could be refuelled by tanker planes. Passenger safety being paramount in civil aviation, the military concepts currently in use will have to be adapted to meet the required reliability standards. In military operations the tanker flies ahead of the receiving aircraft and supplies fuel through a boom from above. To reduce the likelihood of collisions, a forward-extending boom refuelling from below is the solution preferred by the researchers. In this manner the civil aircraft does not fly in the wake of the tanker, which could cause turbulence and affect passenger comfort. Furthermore, the responsibility and training remain with the tanker pilots, who have better visibility of the refuelling process when flying at the rear.

    The researchers also intend to take the concept one step further by exchanging cargo and passengers in midair, thus getting closer to the idea of an airline metro system. This research envisions a new type of large cruising airliner that is fed by much smaller feeder planes. In this scenario, the larger cruisers fly fixed routes over large distances, while the smaller feeders exchange passengers, crew and cargo with the cruiser in midair. One major challenge with this scheme is that the cruiser aircraft will require an incredibly durable engine with low fuel consumption. Such a system does not seem to be economically feasible using current chemically fuelled jet engines. The greater amount of fuel to be stored has to be offset by a larger engine and airframe, which naturally increases the loads on components, in turn requiring thicker sections and structures. Thus, with current chemically fuelled engines you are very much caught in the downward payload spiral that is so frustrating in rocketry.

    But what if the cruisers were propelled by nuclear engines? Well, the efficiency of the system improves significantly. In fact, the efficiency gains are so great that a large cruiser could fly continuously for a whole year on just a few litres of fuel. Powered by nuclear fusion, a cruiser could stay airborne for months, and passengers could hop on and off a continuously airborne global fleet of international airliners.

    And it turns out that in October 2014 Lockheed Martin’s Skunk Works announced that they could have a prototype fusion reactor ready within five years and a working production engine within ten. The obvious “buts” are that a fusion process requires temperatures in the millions of degrees in order to separate ions from electrons, creating a hot plasma in the process. In fusion the danger is not nuclear fallout, as is the case in fission. Fission engines require shielding to protect passengers and also carry the danger of spreading radioactive material in the event of a crash. In a fusion engine the difficulty is in stabilising the plasma and safely containing it in the reactor to guarantee the fusion of ions. The Skunk Works is currently working on an electromagnetic suspension system to guarantee a stable reaction. Furthermore, the neutrons emitted in the fusion process can damage the materials of the containing structure and turn them radioactive, so materials that minimise this induced radioactivity are needed. Finally, fusion reactors need to be miniaturised from the scale of family houses to something more akin to an SUV. In that event fusion reactors would also become an interesting propulsion method for spaceships and other spacecraft that have limited space for power generation.

    While this is all science fiction for now it presents an interesting option for facilitating a global metro-style airline system. And how cool would that be?

  • Human Fallibility in Aviation II: Case Study

    Vanity Fair recently featured an excellent article [1] on Air France Flight 447, which crashed into the Atlantic in 2009. It is a long read, but if you have 30 min to spare it will be a great educational investment.

    The author, William Langewiesche, does a good job of weaving multiple aspects of aeronautics, such as cockpit design, ergonomics, the physics of flight and pilot training, into a story that is ultimately about the role of human fallibility in a system governed by automation. This is a topic that I find highly fascinating and that will only become more pertinent in the future as computers take over an increasing number of tasks in the cockpit. In fact, the psychological impact on the pilots and the effect of automation on the piloting profession as a whole remain uncertain.

    The article features extensive coverage of the pilots’ conversation and provides a riveting account of what transpired in the cockpit prior to the crash. In this way the article brings to light some of the human misjudgements that ultimately led to the catastrophe. On some occasions I found myself cringing in disbelief at the events that transpired, futilely hoping that the pilots would turn the situation around and save the 228 people onboard, while fully aware that hindsight makes all mistakes appear tauntingly clear.

    The reason for the plane crash was a classic case of aerodynamic stall, brought on by the pilot climbing too quickly and exceeding the critical angle of attack, which depending on the operating conditions lies in the range of 13-16°. Even with the angle of attack at an incredible 41°, the aircraft rolling from side to side, the alarm system screaming “STALL”, the cockpit shaking violently due to turbulent flow separation over the wings, and the aircraft losing altitude at a rate of 4,000 feet per minute, each a tell-tale sign of aerodynamic stall, the pilots did not know what was happening with the airplane!

    What brought the aircraft into this situation in the first place? The pitot-static tubes used as sensors for the flight speed had been clogged with ice crystals in a storm, which automatically disengaged the autopilot, disabled the automatic stall protection and returned control to the pilots. Had the pilots at this point continued the modus operandi of keeping the aircraft at the same altitude with the engines at constant thrust, nothing would have happened. It is ironic that the only thing the pilots needed to do to keep the plane safely in the air was nothing. It is unclear why one of the pilots decided to climb to a higher altitude, and especially why this was done so rapidly, but this ultimately triggered the aerodynamic stall of the wings.

    William Langewiesche argues that increasing automation “de-skills” pilots, essentially rendering them incapable of flying an aircraft without support systems. I find the following section especially interesting:

    “For commercial-jet designers, there are some immutable facts of life. It is crucial that your airplanes be flown safely and as cheaply as possible within the constraints of wind and weather. Once the questions of aircraft performance and reliability have been resolved, you are left to face the most difficult thing, which is the actions of pilots. There are more than 300,000 commercial-airline pilots in the world, of every culture. They work for hundreds of airlines in the privacy of cockpits, where their behavior is difficult to monitor. Some of the pilots are superb, but most are average, and a few are simply bad. To make matters worse, with the exception of the best, all of them think they are better than they are. Airbus has made extensive studies that show this to be true.”

    So how has this been dealt with in the past?

    “First, you put the Clipper Skipper [daring WW II fighter pilots] out to pasture, because he has the unilateral power to screw things up. You replace him with a teamwork concept—call it Crew Resource Management—that encourages checks and balances and requires pilots to take turns at flying. Now it takes two to screw things up. Next you automate the component systems so they require minimal human intervention, and you integrate them into a self-monitoring robotic whole. You throw in buckets of redundancy. You add flight management computers into which flight paths can be programmed on the ground, and you link them to autopilots capable of handling the airplane from the takeoff through the rollout after landing. You design deeply considered minimalistic cockpits that encourage teamwork by their very nature, offer excellent ergonomics, and are built around displays that avoid showing extraneous information but provide alerts and status reports when the systems sense they are necessary. Finally, you add fly-by-wire control. At that point, after years of work and billions of dollars in development costs, you have arrived in the present time. As intended, the autonomy of pilots has been severely restricted, but the new airplanes deliver smoother, more accurate, and more efficient rides—and safer ones too.”

    This essentially causes a shift in the piloting profession…

    “In the privacy of the cockpit and beyond public view, pilots have been relegated to mundane roles as system managers, expected to monitor the computers and sometimes to enter data via keyboards, but to keep their hands off the controls, and to intervene only in the rare event of a failure. As a result, the routine performance of inadequate pilots has been elevated to that of average pilots, and average pilots don’t count for much[…]Once you put pilots on automation, their manual abilities degrade and their flight-path awareness is dulled: flying becomes a monitoring task, an abstraction on a screen, a mind-numbing wait for the next hotel.[…] For all three [pilots on Air France Flight 447], most of their experience had consisted of sitting in a cockpit seat and watching the machine work.”

    We all know that automation is indispensable going forward. It is too valuable a system and has made aviation the safe mode of transport it is today. However, the issues raised above will need to be addressed in the near future. Possible solutions may include requiring pilots to turn off the autopilot for a certain number of flights, while another approach may be to improve the machine-human interaction in the cockpit. In either case, I think it is important to point out that catastrophes such as Air France Flight 447 are outliers, black swans, six-sigma events that are unlikely to repeat in the same form. In fact, the roots of the next catastrophe may lie somewhere completely different and are thus impossible to predict.

    References

    [1] William Langewiesche, “The Human Factor”, Vanity Fair, October 2014. http://www.vanityfair.com/business/2014/10/air-france-flight-447-crash

  • Human Fallibility in Aviation

    This blog has focused much on the technical side of aviation. One of the biggest drivers in civil aviation is passenger safety, and the last 40 years have brought tremendous advances on this front, with aviation now being the safest mode of transport. A lot of this has to do with the deep understanding engineers have of the strength of materials (static failure, fatigue and stability), the complexities of airflow (e.g. stall), aeroelastic interactions (e.g. flutter and divergence) and the control of aircraft. Furthermore, appropriate systems have been put in place to deal with uncertainty and to monitor the structural health of aircraft.

    Anyone who has been inside a commercial aircraft cockpit can appreciate the technology that goes into controlling a jumbo jet. The number of switches, levers and lights is mind-boggling. A big part of the high-tech that goes into commercial aircraft is the set of automated control systems that keep the aircraft up in the air and automate parts of flight that require little input from pilots (e.g. cruising at altitude). One could argue that human beings are fallible systems and therefore we should relinquish as much control as possible to automated computer systems. Get the computer to do everything it can and only allow humans to intervene in situations that require human judgement. In short: if it’s technically possible, let’s automate.

    Complexity in the cockpit

    The problem with this argument is that automating a process does not completely remove humans from the picture. If any form of human interaction is required at some point, the pilot still needs to be vigilant at all times in order to be ready to act swiftly when needed. Only focusing on automation and forgetting about the human-system interaction is bound to get us into trouble. This is a great risk of modern day specialisation. Focusing solely on your niche of the problem and forgetting factors from other scientific disciplines – “For a man with a hammer, everything looks like a nail”.

    So, we require more than a hammer in our toolbox. Until we have automated the whole flight envelope to statistical perfection we need to be thinking about the way that systems and humans interact in the cockpit. Guaranteeing infallibility of the technical side is not enough. In fact, the aerospace industry was one of the first to introduce checklists into cockpits, which are used to guide the pilots through specific manoeuvres and prevent avoidable mistakes and steps that are easily overlooked or forgotten under pressure. It is incredible how successful you can be by continuously trying not to be stupid. The checklist system has worked so well that it is now being used in hospitals with amazing results. In the same manner, the interaction between machines and humans has a lot to do with human psychology. As engineers we are generally aware of ergonomic design in order to create functional and user-friendly products. However, I have yet to see a university course that teaches the psychology of automation, or human misjudgement in general, to engineering students.

    However, it is not hard to imagine what automation can do to our brains. For anybody who uses cruise control in their car: are you more or less likely to remain vigilant once the cruise control is set and you’ve taken your foot off the accelerator? I think it’s fair to say that most people will lose focus on what’s happening on the road once they are less engaged. In this way the risk of automation is that it can lead to boredom and loss of attention to detail. This is especially dangerous if we have been lulled into a false sense of comfort and start relinquishing all control in the belief that the system will take care of everything.

    Now why am I bringing this up? Because for exactly these reasons Flight 3407 lost control (aerodynamic stall) and crashed in 2009, killing everyone on board. According to the National Transportation Safety Board the likely causes of the accident were “(1) the flight crew’s failure to monitor airspeed in relation to the rising position of the low-speed cue, (2) the flight crew’s failure to adhere to sterile cockpit procedures, (3) the captain’s failure to effectively manage the flight, and (4) Colgan Air’s inadequate procedures for airspeed selection and management during approaches in icing conditions. [1]” Apart from the fourth reason, everything suggests a simple failure to pay attention. The pilot had not noticed that the airplane lost airspeed during the automated descent. Upon being alerted by the stick shaker, an anti-stall warning system, he inadvertently pulled the control column in the wrong direction, thereby further reducing airspeed and putting the plane into a stall from which it could not recover. In fact, a 1994 National Transportation Safety Board review of thirty-seven accidents involving airline crews found that in 84% of the cases inadequate monitoring of controls was a contributing factor.

    There is a lot to learn from these failures and given the excellent track record of the aviation industry these findings will undoubtedly lead to better procedures. However, apart from better procedures we also need to holistically educate the engineers of tomorrow to look past purely technical design and incorporate research from psychology. Research into how this is best achieved is currently ongoing but for now there is something we can all take away from this: don’t simply automate something because we can, but because we should.

    References

    [1] http://www.ntsb.gov/doclib/reports/2010/aar1001.pdf


  • Loads Acting on Aircraft

    The flight envelope of an aeroplane can be divided into two regimes. The first is rectilinear flight, i.e. flight in a straight line in which the aircraft does not accelerate normal to the direction of flight. The second is curvilinear flight, which, as the name suggests, involves flight in a curved path with acceleration normal to the tangential flight path. Curvilinear flight is often known as manoeuvring and is of greater importance for structural design since the aerodynamic and inertial loads are much higher than in rectilinear flight.

    As the aircraft moves relative to the surrounding fluid, a pressure field is set up over the entire aircraft, not only over the wings, that acts to keep the aircraft aloft. This aerodynamic pressure always acts normal to the outer contour of the skin, but the resultant force can be resolved into components acting tangential and normal to the direction of flight. The sum of the forces normal to the direction of flight gives rise to the lift force L, which offsets the weight of the aircraft W. The tangential components give the resultant drag force D, which in powered flight must be overcome by the propulsive force F. The resultant force F includes the thrust generated by the engines, the induced drag of the propulsive system and the inclination of the line of thrust to the direction of flight. In basic mechanics the aircraft is simplified to a point coincident with its centre of gravity (CG), with all forces assumed to act through the CG. If the net resultant of a force is offset from the CG then a resultant moment will also act on the aircraft. For example, the lift generated by the wings is generally offset from the centre of gravity and may thus produce a net pitching moment that has to be offset by the control surfaces. Figure 1 below shows a simplified free body diagram of an aircraft in level flight, climb and descent.

    Fig. 1. Free body diagram of aircraft in flight (1)

    Note that the lift is only equal and opposite to the weight in steady and level flight, thus:

    [latex] F = D [/latex] and [latex] L = W [/latex]

    In steady descent and steady climb the lift component is less than the weight, since only a component of the weight acts normal to the direction of flight and because by definition lift is always normal to both drag and thrust. Also in climbing the thrust must be greater than the drag to overcome the component of weight acting against the direction of flight and vice versa in descent. Thus in a climb:

    [latex] L = W \cos \gamma_c [/latex] and [latex] F = D + W \sin \gamma_c [/latex]

    and in descent

    [latex] L = W \cos \gamma_d [/latex] and [latex] F = D - W \sin \gamma_d [/latex]

    This situation is schematically represented in Figure 1 by the relative sizes of the different arrows. In general we can imagine the weight being balanced by the lift force L and the difference between the thrust F and the drag D.  A bit of manipulation of the two equations for climb or descent above gives the same expression,

    [latex] L^2 + (F-D)^2 = W^2 \cos^2 \gamma_c + W^2 \sin^2 \gamma_c [/latex]

    such that,

    [latex] W = \sqrt{L^2 + (F-D)^2} [/latex]

    The latter expression is clearly obtained if Pythagoras’ rule is applied to the vector triangles that include (F-D) and L in Figure 1.

    Figure 1 also shows velocity diagrams depicting the relationship between true air speed V, tangential to the direction of flight, and the rates of climb and descent [latex]v_c[/latex] and [latex] v_d[/latex] respectively. We can combine these velocity triangles with the forces triangles to obtain simple equations for the rates of climb and descent,

    [latex] \sin \gamma_c = \frac{F-D}{W} [/latex] and [latex] \sin \gamma_c = \frac{v_c}{V} [/latex] (and analogously [latex] \sin \gamma_d = \frac{v_d}{V} [/latex] in descent)

    such that [latex] v_c [/latex] or [latex] v_d = \frac{F-D}{W} V [/latex].

    This expression can also be used to gain some insight into the driving factors behind gliding flight. In this case the net propulsive force F is zero such that the expression becomes,

    [latex] v_d = -\frac{D}{W} V [/latex], which may be approximated as [latex] v_d = -\frac{D}{L} V [/latex] since the angle of descent in gliding is typically very shallow such that [latex] W \approx L [/latex]. Therefore the gliding efficiency of a sailplane depends on maximising the lift-to-drag ratio L/D. If the ascending thermals are equal to or greater than this rate of descent, then the glider can continuously maintain or even gain altitude.
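
    As a quick sanity check of these expressions, here is a minimal sketch evaluating the rate-of-climb formula and the gliding sink rate; the thrust, drag, weight and speed values are made-up examples, not data for any particular aircraft.

    ```python
    def rate_of_climb(F, D, W, V):
        """Rate of climb (positive) or descent (negative): v = (F - D) / W * V.
        All forces in consistent units; V in m/s."""
        return (F - D) / W * V

    # Hypothetical airliner: 200 kN thrust, 150 kN drag, 700 kN weight, 120 m/s
    print(rate_of_climb(200e3, 150e3, 700e3, 120.0))  # ~8.6 m/s climb

    # Gliding (F = 0) with W ~ L: sink rate ~ V * D/L. A sailplane with
    # L/D = 40 at 25 m/s sinks at only ~0.6 m/s, so thermals can keep it aloft.
    print(rate_of_climb(0.0, 1.0, 40.0, 25.0))        # -0.625 m/s
    ```
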

    An aircraft may of course increase its speed along the direction of rectilinear flight, in which case the thrust force F must be greater than the vector sum of the drag and the component of the weight. A more interesting scenario is accelerated flight where the acceleration occurs as a result of a change in direction rather than a change in speed. By definition, in vector mechanics a change in direction is a change in velocity and therefore an acceleration, even if the magnitude of the speed does not change. A change in the flight path is achieved by changing the magnitude of the overall lift component, or by differences in lift between the two wings, away from the equilibrium condition depicted in Figure 1. This change can either be obtained by a change in true airspeed or by changing the angle of attack of the wings relative to the airflow. Consider the simple banked turn in Figure 2 below.

    Fig. 2. Free body diagram of an aircraft in a banked turn (1)

    As the aircraft banks the lift force normal to the wings is turned through an angle [latex] \theta [/latex] from the vertical weight vector. Since the centripetal acceleration acts horizontally and the weight acts vertically we can use simple trigonometric relations to find the radius of turn:

    [latex] \tan \theta = \frac{F_{centripetal}}{W} = \frac{m V^2 / R }{m g} [/latex] such that [latex] R = \frac{V^2}{g \tan \theta} [/latex]. It is also obvious that the more steeply banked the turn the more lift will be required from the wings since,

    [latex] L = \frac{W}{\cos \theta}[/latex]

    such that an increase in engine power is needed to maintain constant speed in this flight condition. This is one of the reasons why fighter jets that require manoeuvres with very tight radii have such short and stubby wings. Small radii of turn R, and thus high banking angles [latex] \theta [/latex], require increases in lift and therefore increase the bending moments acting on the wings.
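
    To put some illustrative numbers on these two relations, the following sketch evaluates the turn radius and load factor; the speeds and bank angles are arbitrary examples.

    ```python
    import math

    g = 9.81  # gravitational acceleration, m/s^2

    def banked_turn(V, theta_deg):
        """Turn radius R = V^2 / (g tan(theta)) and load factor
        n = L/W = 1 / cos(theta) for a coordinated banked turn."""
        theta = math.radians(theta_deg)
        return V**2 / (g * math.tan(theta)), 1.0 / math.cos(theta)

    # An airliner at 120 m/s in a 30 deg bank vs a fighter at 200 m/s in a
    # 70 deg bank: the tighter turn demands almost 3x the lift
    print(banked_turn(120.0, 30.0))  # R ~ 2.5 km, n ~ 1.15
    print(banked_turn(200.0, 70.0))  # R ~ 1.5 km, n ~ 2.9
    ```
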

    In reality the airplane is subjected to a large variety of different combinations of accelerations (rolls, pull-ups, push-overs, spinning, stalling, gusts etc.) at different velocities and altitudes. In classical mechanics free fall is expressed as an acceleration of -1g and level flight is denoted as 0g. Aeronautical engineers deviate from this convention in order to make the comparison between lift and weight simpler. This means that free fall is denoted by 0g and level flight by 1g. The ratio between lift and aircraft weight is called the load factor n, where [latex] n = \frac{L}{W} [/latex], i.e. n = 0 for free fall, n = 1 for level flight, n > 1 to pull out of a dive and n < 1 to pull out of a climb. The overall load spectrum of an aircraft is captured graphically by so-called velocity-load factor (V-n) diagrams. The outlines of these diagrams are given by the possible combinations of load factor and velocity that an aircraft will be expected to cope with. For example, Figure 3a shows the basic V-n diagram for symmetric flight (asymmetric envelopes exist for rolls etc. but are not covered here).

    Fig. 3. The a) basic manoeuvre and b) gust flight envelopes (1)

    The envelope is constructed from the positive and negative stall lines which indicate, respectively, the maximum and minimum load that can be achieved because of the inability of the aircraft to produce any more lift. Thus,

    [latex] L = n W = \frac{1}{2}C_{L_{max}} \rho V^2 S [/latex]

    where [latex] \rho [/latex] is the density of the surrounding air and [latex] S [/latex] is the wing surface area. The limit load factor [latex] n_l [/latex], also known as the maximum expected service load factor, is defined by

    [latex] n_l = 2.1 + \frac{24000}{W + 10000} [/latex] or 2.5, whichever is greater, with W the maximum take-off weight in pounds.

    [latex] V_A [/latex], [latex] V_C [/latex] and [latex] V_D [/latex] are defined as the maximum manoeuvre speed (the speed above which it is unwise to make full application of any single flight control), the design cruise speed and the maximum dive speed, respectively. The intersection between the horizontal line [latex] n = 1 [/latex] and the left curve of the envelope is also of special significance since it represents the stall speed in level flight. In general the limit load must be tolerable without detrimental permanent deformation. The aircraft must also support the ultimate load (= limit load x safety factor) for at least 3 seconds. The safety factor is generally taken to be 1.5.
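
    A minimal sketch of these two envelope ingredients, the limit load factor formula and the stall line, using assumed airliner-like numbers for weight, [latex] C_{L_{max}} [/latex] and wing area:

    ```python
    import math

    def limit_load_factor(W_lbf):
        """n_l = 2.1 + 24000 / (W + 10000), floored at 2.5 (W in lbf)."""
        return max(2.5, 2.1 + 24000.0 / (W_lbf + 10000.0))

    def stall_line_speed(n, W, CL_max, rho, S):
        """Speed on the stall line, from n W = 0.5 CL_max rho V^2 S."""
        return math.sqrt(2.0 * n * W / (CL_max * rho * S))

    print(limit_load_factor(20000))    # light aircraft: ~2.9
    print(limit_load_factor(500000))   # heavy transport: 2.5 (floor governs)

    # Assumed: W = 700 kN, CL_max = 1.5, sea-level rho = 1.225 kg/m^3, S = 120 m^2
    print(stall_line_speed(1.0, 700e3, 1.5, 1.225, 120.0))  # ~80 m/s level stall
    print(stall_line_speed(2.5, 700e3, 1.5, 1.225, 120.0))  # ~126 m/s at n = 2.5
    ```
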

    Finally, Figure 3b shows a typical gust envelope. A gust alters the angle of attack of the lifting surfaces by an amount equal to [latex] \tan^{-1} (w/V) [/latex], where w is the vertical gust velocity. Since the lift scales with the angle of attack up to the point of aerodynamic stall, the inertia forces applied to the structure are altered by gusts. The gust envelope is constructed with the same stall lines as the basic manoeuvre envelope, and different gust lines are drawn radiating from n = 1 at V = 0. Note that the design gust intensities reduce as the velocity increases, with the intention that the aircraft is flown accordingly. In the gust envelope [latex] V_A [/latex] is replaced with [latex] V_B [/latex], representing the design speed at maximum gust intensity.
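
    The gust-induced change in angle of attack is easy to evaluate; a quick sketch with arbitrary example gust and flight speeds:

    ```python
    import math

    def gust_alpha_deg(w, V):
        """Change in angle of attack (degrees) caused by a vertical gust w
        at true airspeed V: delta_alpha = atan(w / V)."""
        return math.degrees(math.atan2(w, V))

    # The same 10 m/s vertical gust is far more severe at low airspeed,
    # one reason the design gust intensities reduce with increasing speed.
    print(gust_alpha_deg(10.0, 80.0))   # ~7.1 deg near the stall speed
    print(gust_alpha_deg(10.0, 230.0))  # ~2.5 deg at cruise
    ```
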


    References

    (1) Stinton, D. The Anatomy of the Airplane. 2nd Edition. Blackwell Science Ltd. (1998).