Economics of sustainable energy 9781119525929, 1119525926

"First book that takes an objective and unbiased view of all modern economic theories - Offers a dogma-free scienti…"


English · 628 pages [584] · 2018


Table of contents:
Content: Delinearized history of economics and energy --
The incompatibility of conventional economic analysis tools with sustainability models --
State-of-the-art of current technology development --
Comprehensive analysis of energy sustainability --
The Islamic track of economical analysis --
Framework of economics of sustainable energy --
Economics of sustainable energy operations --
Role of government in assuring energy sustainability.


Contents
Cover · Title Page · Copyright Page · Dedication · Preface
Chapter 1: Introduction
1.1 Opening Remarks
1.2 Research Questions Asked
Chapter 2: Delinearized History of Economics and Energy
2.1 Introduction
2.2 The European Tract of Economics History
2.3 Transition of Money
2.4 The Nature Science Tract of Economics History
2.5 Connection to Energy
Chapter 3: The Incompatibility of Conventional Economic Analysis Tools with Sustainability Models
3.1 Introduction
3.2 Current Economic State of the World
3.3 The Status of the Money God
3.4 The Current Economic Models
3.5 The Illogicality of Current Theories
3.6 The Delinearized History of Modern Economics
3.7 The Transition of Robotization
3.8 Yellow Gold vs. Black Gold
3.9 How Science and Economics Mimic the Same Aphenomenality
Notes
Chapter 4: State-of-the-Art of Current Technology Development
4.1 Introduction
4.2 Denaturing for a Profit
4.3 Aphenomenal Theories of the Modern Era
4.4 The Sugar Culture and Beyond
4.5 The Culture of the Artificial Sweetener
4.6 Delinearized History of Saccharin® and the Money Trail
4.7 The Culture of Aspartame
4.8 The Honey-Sugar-Saccharin-Aspartame Degradation in Everything
4.9 Assessing the Overall Performance of a Process
Chapter 5: Comprehensive Analysis of Energy Sustainability
5.1 Introduction
5.2 Sustainability in the Information Age and Environmental Insult
5.3 Climate Change Hysteria
5.4 The Energy Crisis
5.5 Petroleum in the Big Picture
5.6 Science of Healthy Energy and Mass
Chapter 6: The Islamic Track of Economical Analysis
6.1 Introduction
6.2 Function of Gold Dinars and a New Paradigm for Economic Analyses
6.3 Labor Theory of Value
6.4 Zero Waste Economy
6.5 Role of Government in State’s Economy
6.6 Macroeconomy and Theory on Money
6.7 The Optimum Lifestyle
6.8 The Gold Standard for Sustainable Economy
Chapter 7: Framework of Economics of Sustainable Energy
7.1 Introduction
7.2 Delinearized History of Modern Age
7.3 Petroleum Refining and Conventional Catalysts
7.4 The New Synthesis
7.5 The New Investment Model, Conforming to the Information Age
Chapter 8: Economics of Sustainable Energy Operations
8.1 Introduction
8.2 Issues in Petroleum Operations
8.3 Critical Evaluation of Current Petroleum Practices
8.4 Greening of Petroleum Operations
8.5 Zero-Waste Operations
8.6 Characteristic Time
8.7 Quality of Energy
Chapter 9: Role of Government in Assuring Energy Sustainability
9.1 Introduction
9.2 The U.S. Government
9.3 The Wealth Paradigm
9.4 Zero Interest
9.5 Zero-Waste Economics
Chapter 10: Summary and Conclusions
10.1 Summary
10.2 Answers to the Research Questions
Chapter 11: References and Bibliography
Index
End User License Agreement

List of Illustrations
Chapter 1
Figure 1.1 Knowledge model vs. aphenomenal model.
Figure 1.2 Congressional approval rate for the last 32 years (Gallup data).
Figure 1.3 Nature is inherently sustainable (from Khan and Islam, 2007).
Chapter 2
Figure 2.1 If truth criteria are not met, time only increases ignorance and false confidence.
Figure 2.2 New science only added more arrogance, blended with ignorance, to dogma science.
Figure 2.3 Exchange of goods can cause economic growth or collapse, depending on the starting point (solid circle: natural starting point; hollow circle: unnatural/artificial starting point).
Figure 2.4 Rise of bitcoin transactions.
Figure 2.5 Miner fee rise in 2016–2017 (from Website 1).
Figure 2.6 Price of Bitcoin from inception to the end of 2017.
Figure 2.8 World reserve of gold (from Gold Reserve, Inc.).
Figure 2.9 US annual gold production (from California Gold Mining, Inc.).
Figure 2.10 Gold price fluctuations since the 1971 decoupling of gold and US dollars (from onlygold.com).

Figure 2.11 Inflation-adjusted price of gold (from Rowlatt, 2013).
Figure 2.12 U.S. mine production of silver from 1860 to 2000. Total production prior to 1860 was estimated to be 25 metric tons (t) (data from USGS, 2018a).
Figure 2.13 Silver reserve in top silver-reserve countries (from Statista, 2018c).
Figure 2.14 Silver/gold price in the USA (from https://goldprice.org/gold-pricehistory.html).
Figure 2.15 Gold/silver ratio distribution (from Fulp, 2016).
Figure 2.16 The rarest metals (from Haxel et al., 2002).
Figure 2.17 Variation in “gold dollar” and “petrodollar” (left y-axis: gold price in $/oz; right y-axis: crude oil price in $/bbl); from American Bullion, Inc.
Figure 2.18 Variation of gold price and oil price during 1987–2012 (from American Bullion, Inc.).
Chapter 3
Figure 3.1 The rich are benefitting the most from the stock market’s historic run (Wile, 2018).
Figure 3.2 Past and future projection of shares of global wealth of the top 1% and bottom 99% (Oxfam Report, 2018).
Figure 3.3 Past shares of global wealth of the top 1% and bottom 99% (from Khan and Islam, 2016).
Figure 3.4 Economic disparity and movement of wealth in the USA.
Figure 3.5 Economic disparity in the USA; the bottom (visible) pink line is the top 10% (original data from Credit Suisse, 2014).
Figure 3.6 National debt increase during various presidencies.
Figure 3.7 Dept. of Commerce, Bureau of Economic Analysis.
Figure 3.8 Our current epoch is an epic failure of intangible values.
Figure 3.9 The change in human collective activities from 1750 to 2000 (from Adams and Jeanrenaud, 2005).
Figure 3.10 Lung cancer mortality rate in Sweden: cancer trends during the 20th century (Hallberg, 2002).
Figure 3.11 Rise in obesity (OECD analysis of health survey data, 2011).
Figure 3.12 Incidence of diabetes in children under age 10 years in Norway, 1925–1995 (from Gale, 2002).
Figure 3.13 Robotization of humanity: the current state of the world.

Figure 3.14 US defense spending in recent history (from John Fleming/The Heritage Foundation, 2018).
Figure 3.15 Healthcare cost as a percentage of GDP for various developed countries (OECD Report, 2017).
Figure 3.16 Price variation in diabetes treatment chemicals (OECD Report, 2017).
Figure 3.17 Aggregated revenues reported by pharmaceutical and biotechnology companies from 1991 to 2014 (OECD Report, 2017).
Figure 3.18 Percentage of children (aged 0 to 17) who are living in relative poverty, defined as living in a household in which disposable income, when adjusted for family size and composition, is less than 50% of the national median income. https://www.theguardian.com/us-news/2017/oct/19/big-pharma-money-lobbying-usopioid-crisis.
Figure 3.19 Both optimism and fear lead to movements in the financial market, thereby stimulating an economy that is stacked up against sustainability.
Figure 3.20 Modern science and technology development schemes focus on turning the natural into the artificial and assigning artificial values proportional to the aphenomenality of a product.
Figure 3.21 The Lewis dual economy thrives on the existence of inherent disparity.
Figure 3.22 Sustainability can be defined as the inevitable outcome of a conscientious start (1: phenomenal start with phenomenal intention; 2: aphenomenal start and/or aphenomenal intention). Figure redrawn from Khan and Islam (2016).
Figure 3.23 Good behaviors in humans lie within an optimum regime of individual liberty (redrawn from Islam et al., 2017).
Figure 3.24 Origins of the Arabic word for “happiness”: a non-Eurocentric view.
Figure 3.25 Maximizing the rate of return on investments for others. This figure illustrates one prospect that becomes practically possible if intangible benefits are calculated into, and as part of, a well-known conventional treatment of investment capital that was developed initially to deal purely with tangible aspects of the process and on the assumption that money would normally be invested only to generate a financial return to its investor (from Zatzman and Islam, 2007).
Figure 3.26 Sensitivity of business turnover to employer-employee trust. Under a regime guided by the norms of capital-dependent conventional economics, trustworthiness counts for nothing. Under an economic approach that takes intangibles into account, on the other hand, revenue growth in an enterprise should be enhanced.
Figure 3.27 The figure depicts the direction of ‘legal derivation’, the process by which law is discerned, for each of the theorists’ models.
Figure 3.28 Both scientific and social theories have invoked aphenomenal premises that have become increasingly illogical.
Figure 3.29 Real demand reaches equilibrium, whereas artificially created demand leads to an implosive economic infrastructure that ends up in a crisis.
Figure 3.30 Knowledge has to be true; otherwise it will create false perception and total opacity in the economic system.
Figure 3.31 From ill intention and ill-gotten power comes the onset of the cancer model. It gains traction, increasing the misery of the general population upon which the ‘aphenomenal model’ is being applied.
Figure 3.32 Every ‘-ism’ introduced in the modern age belongs to the same false premise that launched the current civilization in the deliberate hunger game model.
Figure 3.33 The great debate rages on, mostly as a distraction from the real debate that should be on fundamental premises that are taken at face value.
Figure 3.34 Interest rate over the years (shaded areas: US recessions); data from research.stlouisfed.org.
Figure 3.35 Historical fluctuation of inflation rates in the USA (redrawn from Website 1).
Figure 3.36 Interest rate and inflation rate.
Figure 3.37 Interest rate and inflation rate over the years (from Federal Reserve Economic Data).
Figure 3.38 The short-run Phillips curve.
Figure 3.39 Taxes as a percentage of GDP for various countries. The darker bar is the US; the darkest bar is the average for advanced countries (OECD Report, 2017).
Figure 3.40 Pictorial depiction of estates with tax concerns (from Krugman, 2017).
Figure 3.41 The Rahn curve.
Figure 3.42 Taxation is part and parcel of government growth.
Figure 3.43 History of government employment and manufacturing employment (from Jeffrey, 2015).
Figure 3.44 Government size or government debt hasn’t been a partisan issue.
Figure 3.45 Transition from real value to perceived value of commodities (data from the Commodity Futures Trading Commission website).
Figure 3.46 Falsehood is turned into truth, and the ensuing disinformation makes sure that truth does not come back in subsequent calculations.
Figure 3.47 A new paradigm is invoked after denominating spurious value as real and disinformation as ‘real knowledge’.
Figure 3.48 How falsehood is promoted as truth, and vice versa.

Figure 3.49 Inflation rate and world events.
Figure 3.50 Gold prices throughout modern history in US $ (from onlygold.com).
Figure 3.51 Various uses of gold (redrawn from Thomas, 2015).
Figure 3.52 The Dow Jones Industrial Average from 1928/9 to 1934, with annotations describing brief historical-economic events (modified from Velauthapill, 2009).
Figure 3.53 The long-term inflation-adjusted price of gold based on 2011 dollars (data from MeasuringWorth.com, as reported by Ferri, 2013). It highlights the average $500-per-ounce price over the 220-year period, which was passed through many times.
Figure 3.54 Inflation-adjusted oil price for the last 70 years.
Figure 3.55 Ratio of gold price (per ounce) over oil price (per barrel). Grey areas mark recession periods.
Figure 3.56 Net development (true GNP per capita, after subtracting foreign debt payments and re-exported profits of TNCs, etc.) and net dependency for various countries.
Chapter 4
Figure 4.1 Economic activities have become synonymous with corporate profiteering and the denaturing of society.
Figure 4.2 The outcome of the short-term profit-driven economics model.
Figure 4.3 Millions of tons of sugar produced globally over the years (from Website 2).
Figure 4.4 Sugar production history by region (from Islam et al., 2015).
Figure 4.5 Sugar structure (note how all catalysts disappear).
Figure 4.6a Chemical structure of saccharin and related salts.
Figure 4.6b Chemical reactions used during saccharin manufacturing.
Figure 4.7 Saccharin consumption share in 2001 (from Khan and Islam, 2016).
Figure 4.8 The dominance of saccharin has continued over the last three decades (from Islam et al., 2015).
Figure 4.9 Cost per tonne for various sugar products (2003 value; from Islam et al., 2015).
Figure 4.10 Aspartame market growth since 1984 (from Khan and Islam, 2016).
Figure 4.11 Market share of various artificial sweeteners (from Islam et al., 2015).
Figure 4.12 Chemical structure of Aspartame®.

Chapter 5
Figure 5.1 Public perception toward energy sources (Ipsos, 2011).
Figure 5.2 Energy outlook for 2040 as compared to 2016 under various scenarios (*Renewables includes wind, solar, geothermal, biomass, and biofuels; from BP Report, 2018).
Figure 5.3 World plastic production (from Statista, 2018).
Figure 5.4 Annual per capita water consumption in metric tons in 2013 (from Statista, 2018a).
Figure 5.5 (from USGS, 2017).
Figure 5.6 Oil dependence of various countries (from Hutt, 2016).
Figure 5.7 Spider chart of Saudi Arabia (when comparing multiple countries on a spider …).
Figure 5.8 Breakdown of Saudi Arabia’s prosperity index.
Figure 5.9 Norway’s prosperity index in spider-chart form.
Figure 5.10 Breakdown of Norway’s prosperity index.
Figure 5.11 Oil dependence in terms of GDP share and historical oil prices (World Bank, 2017).
Figure 5.12 Trends in GDP and energy intensity.
Figure 5.13 The bell curve has been the base curve of many theories in the modern era (x-axis replaced with time, y-axis with global oil production).
Figure 5.14 Population growth history and projection (data from CIA Factbook, UN).
Figure 5.15 Estimated, actual, and projected population growth (decline).
Figure 5.16 World population growth for different continents.
Figure 5.17 There are different trends in population growth depending on the state of the economy.
Figure 5.18 Per capita energy consumption growth for certain countries.
Figure 5.19 A strong correlation between a tangible index and per capita energy consumption has been at the core of economic development (from Goldenberg, 1985).
Figure 5.20 While population growth has been tagged as the source of economic crisis, wasteful habits have been promoted in the name of emulating the West.
Figure 5.21 Population and energy paradox for China (from Speight and Islam, 2016).
Figure 5.22 Oil production and import history of the USA (data from EIA).
Figure 5.23 US data that appear to support Hubbert’s “peak oil” hypothesis (from Speight and Islam, 2016).
Figure 5.24 Comparison of the Hubbert curve with Norwegian oil production (from Speight and Islam, 2016).
Figure 5.25 The Association for the Study of Peak Oil (ASPO) produced evidence of a Hubbert peak in all regions.
Figure 5.26 Actual global oil production (surface-mined tar sand not included).
Figure 5.28 Production-cost and market-price realities “at the margin” (from Zatzman and Islam, 2007).
Figure 5.29 US public debt as a percentage of GDP.
Figure 5.30 Energy content of different fuels (MJ/kg) (from Speight and Islam, 2016).
Figure 5.31 Fossil fuel reserves and exploration activities.
Figure 5.32 Discovery of natural gas reserves with exploration activities (from Islam, 2014).
Figure 5.33 Natural gas production history in New York State (from Islam, 2014).
Figure 5.34 Locations of unconventional shale plays in the lower 48 states (from Ratner and Tiemann, 2014).
Figure 5.35 Moving from conventional to unconventional sources, the volume of the petroleum resource increases.
Figure 5.36 Cost of production increases as efficiency, environmental benefits, and real value of crude oil decline (modified from Islam et al., 2010).
Figure 5.37 Current estimate of conventional and unconventional gas reserves (from Islam, 2014).
Figure 5.38 Abundance of natural resources as a function of time.
Figure 5.39 Water plays a more significant role in material production than previously anticipated (from Islam, 2014).
Figure 5.40 Gas hydrate deposits of Alaska (from Islam, 2014).
Figure 5.41 Known and inferred natural gas hydrate occurrences in marine (red circles) and permafrost (black diamonds) environments (from Islam, 2014).
Figure 5.42 Future trends in some of the major future users of unconventional gas (from EIA report, 2013).
Figure 5.43 Gas hydrates, the largest global sink for organic carbon, offer the greatest prospect for the future of energy (from Islam, 2014).
Figure 5.44 Water: a source of life when processed naturally but a potent toxin when processed mechanically.

Figure 5.45 Aristotle’s four-element phase diagram (steady-state).
Figure 5.46 Divinity in Europe is synonymous with uniformity, symmetry, and homogeneity, none of which exists in nature.
Figure 5.47 Recasting Figure 5.45 with the proper time function.
Figure 5.48 Water and fire are depicted through taegeuk (yin yang).
Figure 5.49 The Korean national flag contains an ancient symbol of creation and creator.
Figure 5.50 Combinations of various fundamental elements make up the rest of the creation (from Islam, 2014).
Figure 5.50 Evolution of yin and yang with time (from Islam, 2014).
Figure 5.51 Sun, earth, and moon move at a characteristic speed in infinite directions.
Figure 5.52 Orbital speed vs. size (not to scale).
Figure 5.53 The heartbeat (picture above) represents the natural frequency of a human, whereas brain waves represent how a human is in harmony with the rest of the universe (from Islam et al., 2015).
Figure 5.54 Maximum and minimum heart rate for different age groups (from Islam et al., 2015).
Figure 5.55 Tangible/intangible duality continues infinitely from mega-scale to nanoscale, from infinitely large to infinitely small.
Chapter 6
Figure 6.1 A large market consists of high demand (D1) compared to a small market (D0), even at a different price level. This also causes large investment, in turn causing high supply (S1). Through the cost and return function, a large market generates large income as well (from Koutsoyiannis, 1979).
Figure 6.2 Derivation of demand. Graphical presentation of the utility function by Thomas Malthus and Alfred Marshall. Derived from the Total Utility (TU) curve, Marginal Utility is congruent with the Demand curve (D) against price and quantity (from Koutsoyiannis, 1979).
Figure 6.3 Cost-push and demand-pull inflation. Economists agree that an increase in cost, as illustrated by shifting Aggregate Supply upward (AS0–AS1), causes an increase in the general price level, from P0 to P1. A similar effect occurs when there is an increase in Aggregate Demand, illustrated by an upward shift of the AD curve (AD0–AD1) (from Branson, 1989).
Figure 6.4 The difference between a zero-interest economy and an interest-based economy is glaring.
Figure 6.5 Islamic society finds an optimum between individual liberty and regulatory control.
Figure 6.6 Good intention launches the knowledge-based model, whereas a bad intention throws off cognition into ignorance and prejudice.
Figure 6.7 Intentions are the driver of sustainability.
Figure 6.8 Niyah is original intention, whereas qsd is dynamic intention.
Figure 6.9 Summary of Islamic economy vis-à-vis modern economy.
Chapter 7
Figure 7.1 Documenting pathways by which intangible natural gifts are destroyed by being converted into tangibly valuable commodities.
Figure 7.2 Economic models have to be retooled to make price proportional to real value.
Figure 7.3 Summary of the historical development of the major industrial catalytic processes per decade in the 20th century (from Fernetti et al., 2000).
Figure 7.4 Natural chemicals can turn an unsustainable process into a sustainable process while preserving similar efficiency.
Figure 7.5 Trend of long-term thinking vs. trend of short-term thinking.
Figure 7.6 Bifurcation, a familiar pattern from chaos theory, is useful for illustrating the engendering of more degrees of freedom in which solutions may be found, as the “order” of the “phase space” (or, as in this case, the dimensions) increases from one to two to three to four.
Figure 7.7 In the knowledge dimension, data about quarterly income over some selected time span displays all the possibilities: negative, positive, short-term, long-term, cyclical, etc.
Figure 7.8 Linearization of economic data.
Figure 7.9 When intangibles are included, the rate of return becomes a monotonic function of the investment duration.
Figure 7.10 Business turnover cannot be studied with conventional economic theories.
Figure 7.11 If regular light bulbs were lousy replacements for sunlight, the fluorescent light is scandalous: the true shock-and-awe approach, at your expense (from Islam et al., 2010).
Figure 7.12 By converting sunlight into artificial light, we are creating spontaneous havoc that continues to spiral down as time progresses. Imagine trying to build a whole new “science” trying to render this spiral-down mode “sustainable”.
Figure 7.13 In the current technology development mode, cost goes up as the overall goodness of a product declines.

Figure 7.14 Increasing threshold investment eliminates competition, the essence of free-market economy and economic growth.
Figure 7.15 Because of the “stupidity, squared” mode, technology development in the West continues at the expense of technology dependence in the East. In this, the developing countries are ignorant because they think that this technological dependence is actually good for them, and developed countries are ignorant because they think one can exploit others at the level of obscenity and get away with it in the long term. Are you better off today than you were 4000 years ago? You don’t have to consult Moses to find an answer.
Figure 7.16 As a result of the over-extension of credit and subsequent manipulation (by the creditors: Paris Club, etc.) of the increasingly desperate condition of those placed in their debt, nostrums about “development” remain a chimaera and cruel illusion in the lives of literally billions of people in many parts of Africa, Asia, and Latin America. (Here the curves are developed from the year 1960.)
Figure 7.17 Pathways destructive of intangible social relations (cf. Figure 1).
Figure 7.18 Is it the total population that makes the economy plummet, or rather the growth in the corrupt portion that one should worry about?
Figure 7.19 The role of interest rate and the operating principles around the world.
Figure 7.20 The role of interest rate in driving economic decline.
Chapter 8
Figure 8.1 Crude oil formation pathway (after Chhetri and Islam, 2008).
Figure 8.2 General activities in oil refining (Chhetri and Islam, 2007b).
Figure 8.3 Pathway of the oil refining process (after Chhetri et al., 2007).
Figure 8.4 Natural gas “well to wheel” pathway.
Figure 8.5 Natural gas processing methods (redrawn from Chhetri and Islam, 2006b).
Figure 8.6 Ethylene glycol oxidation pathway in alkaline solution (after Matsuoka et al., 2005).
Figure 8.7 Schematic showing the position of current technological practices relative to natural practices.
Figure 8.8 Different phases of petroleum operations (seismic, drilling, production, transportation and processing, and decommissioning) and their associated waste generation and energy consumption (Khan and Islam, 2006a).
Figure 8.9 Schematic of wavelength and energy level of photons (from Islam et al., 2010).
Figure 8.10 Breakdown of the no-flaring method (Bjorndalen et al., 2005).

Figure 8.11 Supply chain of petroleum operations (Khan and Islam, 2006a).
Figure 8.12 Water vapor absorption by Nova Scotia clay (Chhetri and Islam, 2008).
Figure 8.13 Decrease of pH with time due to sulfur absorption in de-ionized water (Chhetri and Islam, 2008).
Figure 8.14 Schematic of a sawdust-fuelled electricity generator.
Figure 8.15 Water-fire yin yang, showing how without one the other is meaningless.
Figure 8.16 The sun, earth, and moon are all moving at a characteristic speed in infinite directions.
Figure 8.17 Orbital speed vs. size (not to scale) (from Islam, 2014).
Figure 8.18 Natural light pathway.
Figure 8.19 Wavelength spectrum of sunlight (from Islam et al., 2015).
Figure 8.20 Colors and wavelengths of visible light.
Figure 8.21 Artificial and natural lights affect natural material differently.
Figure 8.22 Wavelength spectrum of the visible part of sunlight.
Figure 8.23 Visible natural colors as a function of various wavelengths and intensity of sunlight.
Figure 8.24 Wavelength and radiance for forest fire, grass, and warm ground (from Li et al., 2005).
Figure 8.25 Blue-flame radiance for butane (from Islam, 2014).
Figure 8.26 Artificial light spectrum (from Islam, 2014).
Figure 8.27 Comparison of various artificial light sources with sunlight.
Figure 8.28 Comparing within the visible light zone will enable one to rank various artificial light sources (from Islam et al., 2010).
Figure 8.29 Formation of a shield with dark and clear lenses (from Islam et al., 2010).
Figure 8.30 Benefit to the environment depends entirely on the organic nature of energy and mass.
Figure 8.31 Oxygen cycle in nature involving the earth (from Islam, 2014).
Figure 8.32 Hydrogen cycle in nature involving the earth.
Figure 8.33 Water cycle, involving energy and mass.
Figure 8.34 Whole-rock Rb-Sr isochron diagram, basement samples (from Islam et al., 2018).
Figure 8.35 Natural processing time differs for different types of oils.

Figure 8.36 Natural processing enhances intrinsic values of natural products.
Figure 8.37 The volume of petroleum resources increases as one moves from conventional to unconventional (from Islam, 2014).
Figure 8.38 Cost of production increases as efficiency, environmental benefits, and real value of crude oil decline (modified from Islam et al., 2010).
Figure 8.39 Overall refining efficiency for various crude oils (modified from Han et al., 2015).
Figure 8.40 Crude API gravity and heavy-product yield of the studied US and EU refineries (the yield of heavy products, such as residual fuel oil, pet coke, asphalt, slurry oil, and reduced crude, is calculated as a share of all energy products by energy value) (from Han et al., 2015).
Figure 8.41 Current estimate of conventional and unconventional gas reserves (from Islam, 2014).
Figure 8.42 Abundance of natural resources as a function of time.
Figure 8.43 Water plays a more significant role in material production than previously anticipated (from Islam, 2014).
Figure 8.44 As natural processing time increases, so does the reserve of natural resources (from Chhetri and Islam, 2008).
Figure 8.45 Production/reserve ratio for various countries.
Figure 8.46 Crude oil production continues to rise overall (from EIA, 2017).
Figure 8.47 U.S. reserve variation in recent history (from Islam, 2014).
Figure 8.48 Technically recoverable oil and gas reserves in the U.S.A. (from Islam, 2014).
Figure 8.49 Sulfur content of U.S.A. crude over the last few decades (from Islam, 2014).
Figure 8.50 Declining API gravity of USA crude oil.
Figure 8.51 Worldwide crude oil quality (from Islam, 2014).
Figure 8.52 The three phases of conventional reserve.
Figure 8.53 Unconventional reserve growth can be given a boost with scientific characterization.
Figure 8.54 Profitability grows continuously with time when a zero-waste oil recovery scheme is introduced.
Chapter 9
Figure 9.1 Population growth history and projection (source: data from CIA Factbook).
Figure 9.2 Population growth rate over the years 1760–2100.
Figure 9.3 World population growth for different continents (data from UN DESA/Population Division, 2015).
Figure 9.4 Trends in population growth depending on the state of the economy. Data from http://esa.un.org/unpd/wpp/.
Figure 9.5 Wars of different variety.
Figure 9.6 Post-colonial war model and hierarchy under gold and silver currency.
Figure 9.7 A depiction of how today’s banking system is nothing but a spurious money-making scheme.
Figure 9.8 Schematic of a zero-waste energy and mass consumption scheme.
Figure 9.9 True sustainability cannot be determined with a short-term analysis.
Chapter 10
Figure 10.1 HSSAN degradation has been ubiquitous in the modern era.
Figure 10.2 It is not enough to arrest the degradation; the trend has to be reversed.
Figure 10.3 Current economic policies act like a cancer on socioeconomic health.
Figure 10.4 For overall economic welfare, each financial crisis has to be dealt with through natural remedies that are well intentioned and far away from the greed-and-fear cycle.
Figure 10.5 Budgetary and regulatory growth in the US Government.
Figure 10.6 Economy under a benevolent government that imposes zero-waste technology with a zero interest rate and gold as the standard.

List of Tables
Chapter 2
Table 2.1 Fundamental premises of Aristotle in relation to economic theories.
Table 2.2 Transition of money from the gold standard.
Table 2.3 Gold reserve held by various countries (data from Holmes, 2016).
Table 2.4 Top gold-holding countries.
Table 2.5 Gold reserve and recent production (data from USGS, 2018).
Table 2.6 Mine production and reserve of various countries.
Table 2.7 Distinction between gold and silver.
Table 2.8 Comparison of various traits of gold and oil.

Chapter 3
Table 3.1 Spending in various households (from Roth, 2017).
Table 3.3 Occurrence of autism (data from CDC, 2017).
Table 3.9 The HSS®A® pathway and its outcome in various disciplines.
Table 3.1 Inflation rates during last.
Chapter 4
Table 4.1 Typical features of natural processes, as compared to the claims of artificial processes (from Khan and Islam, 2016).
Table 4.2 True difference between sustainable and unsustainable processes (reproduced from Khan and Islam, 2012).
Table 4.3 Features of external entity (from Islam, 2014).
Table 4.4 How natural features are violated in the first premise of various ‘laws’ and theories of the science of tangibles (Islam et al., 2014).
Table 4.5 Transitions from natural to processed.
Table 4.6 Sugar consumption for various regions/countries (from Islam et al., 2015).
Table 4.7 Commodity price over the last few decades (from Islam et al., 2015).
Table 4.8 Prices of various artificial sweeteners (from Islam et al., 2015).
Table 4.10 Global exports of saccharin (from USITC publication, http://www.usitc.gov/publications/701_731/pub4077.pdf).
Table 4.11 Synthesized and natural pathways of organic compounds as energy sources, ranked and compared according to selected criteria.
Chapter 5
Table 5.1 Ranking of various countries on oil dependence and the Legatum prosperity index.
Table 5.2 Per capita energy consumption (in TOE) for certain countries.
Table 5.3 US crude oil and natural gas reserve (million barrels).
Table 5.4 The tangible and intangible nature of yin and yang (from Islam, 2014).
Table 5.5 Characteristic frequency of “natural” objects (from Islam, 2014).
Chapter 7
Table 7.1 Some “breakthrough” technologies (from Khan and Islam, 2016).
Table 7.2 The transition from natural to artificial commodities, and the reasons behind their transition.

Table 7.3 The HSS®A® pathway and its outcome in various disciplines.
Table 7.4 Natural processes vs. engineered processes.
Table 7.5 The HSS®A® pathway in energy management schemes.
Table 7.6 Overview of petroleum refining processes (U.S. Department of Labor, n.d.).
Chapter 8
Table 8.1 Emissions from a refinery (Environmental Defense, 2005).
Table 8.2 Primary wastes from an oil refinery (Environmental Defense, 2005).
Table 8.3 Wavelength and quantum energy levels of different radiation sources (from Islam et al., 2015).
Table 8.1 The tangible and intangible nature of yin and yang (from Islam, 2014).
Table 8.2 Characteristic frequency of “natural” objects (from Islam, 2014).
Table 8.3 Sun composition (Chaisson and McMillan, 1997).
Table 8.4 Wavelengths of various visible colors (from Islam, 2014).
Table 8.5 Wavelengths of known waves (from Islam et al., 2015).
Table 8.6 Artificial sources of various waves (from Islam et al., 2016).
Table 8.7 Various elements in the earth’s crust and lithosphere (from Islam, 2014).
Table 8.8 Elements in the human body by mass (from Emsley, 1998).
Table 8.9 Published isotopic mineral ages for Precambrian basement in southwestern Ontario, Michigan, and Ohio (from Islam et al., 2018).
Table 8.10 Summary of proven reserve data as of (Dec) 2016 (from Islam et al., 2018).
Chapter 9
Table 9.1 Cost of gold production for China Gold Intl. Resources.

Scrivener Publishing
100 Cummings Center, Suite 541J
Beverly, MA 01915-6106

Publishers at Scrivener
Martin Scrivener ([email protected])
Phillip Carmical ([email protected])

Economics of Sustainable Energy

Jaan S. Islam
M.R. Islam
Meltem Islam
M.A.H. Mughal

This edition first published 2018 by John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA and Scrivener Publishing LLC, 100 Cummings Center, Suite 541J, Beverly, MA 01915, USA © 2018 Scrivener Publishing LLC For more information about Scrivener publications please visit www.scrivenerpublishing.com. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions. Wiley Global Headquarters 111 River Street, Hoboken, NJ 07030, USA For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com. Limit of Liability/Disclaimer of Warranty While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials, or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. 
Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read.

Library of Congress Cataloging-in-Publication Data

ISBN 978-1-119-52592-9

Dedication “We dedicate this book to Elif Hamida Islam, whose thirst for knowledge and passion for true perfection have been an inspiration to us” Jaan Islam, M. Rafiq Islam, and Meltem Islam “I am dedicating this book to my children: Ibrahim, Sarah and Javaria. Their affinity for TRUTH and kindness to me is the coolness of my heart.” – A.H. Mughal

Preface

Public confidence in the political, financial, and corporate media establishment is at its nadir. Economics – a subject uniquely concerned with the optimum distribution of wealth in society – has become a laughing stock in the face of unprecedented accumulation of wealth among the richest 1% and the spectacular failure of the Establishment to arrest the free-fall of social justice. Yet the "left" cannot think of anything beyond more taxation, whereas the "right" cannot think of anything beyond more tax breaks for the rich. The scientific community is equally clueless. What Nobel laureate chemist Robert Curl characterized as a "technological disaster" is matched by what Nobel laureate economist Robert J. Shiller described as the failure of the economics profession to contribute anything significant to society. The discipline of economics is already infamous for having the greatest number of paradoxes, but what could beat the paradox of a US economy growing "stronger" in proportion to a national debt that stands at a record high? This book offers hope and guides the readership to a full understanding of the root causes of the current global crisis. It then shows how the spiralling down can be reversed and true sustainability restored. Every year, as soon as the Oxfam report on global economic inequality reminds us of the direction our civilization is heading, there is a hysterical reaction, but the hysteria dies down within weeks and we go back to the lifestyle that brought us here. Often the blame is laid on the millennial generation for its "apathy", "lust for comfort" and "bratty" attitude. Yet Business Insider surveys indicate that it is this same millennial generation that overwhelmingly cares about the state of the world and the direction our civilization is heading. Nearly 50% of them ranked climate change and the destruction of nature as their primary concern.
This is followed by concern about war and global conflict, and then global economic inequality. The vast majority of those surveyed are willing and eager to make lifestyle changes. This book breaks open the hypocrisy of our civilization, stops the blame game in its tracks, and identifies the root causes of the state of today's world economy, ecology, and global politics. Because economics is the driver of today's civilization, the book starts with a delinearized history of economics, covering the entire span from the ancient Greeks to the Information Age. Step by step, every piece of disinformation is exposed, making the root causes of the spiralling-down mode of the global economy plain to see. In the process, the top 10 economists of the modern era (selected from both "right" and "left") are deconstructed and their "mistakes" identified. The book shows that these "mistakes" are embedded in every economic policy that has driven the modern economy, and that the climate change crisis, wars and conflicts, and overall economic extremism are but symptoms of a malaise that lies at the core of modern civilization. Just as economics is the spiritual driver of our civilization, technology development is its mechanical driver. This book deconstructs the technology development mode that emerged from Newtonian mechanics and blossomed during the "plastic era" for over a century. The root causes of the unsustainability of this mode are exposed, laying the foundation for developing sustainable technology, with sustainable energy

management as the prototype. The book makes it clear that changes in economic policies are a prerequisite to changes in energy management and technology development. Only then can one begin to talk about reversing the global spiralling down of economic welfare and the state of the environment. The book demonstrates that changes in lifestyle are necessary but not sufficient. No economic policy or technology development mode has a chance to survive, let alone thrive, unless supported by the political establishment. In this process, the government plays a pivotal role. The challenge is to change the attitude of the government from a "self-serving" controlling mode to a representative, philanthropic mode. This new system of economic development and political governance is inspired by a long-forgotten understanding of political economics: medieval Islamic economics. In reviewing the history of economics in terms of trade, currencies, and interest, the strengths and weaknesses of various economic developments over the centuries are evaluated. Based on the historical analysis, a step-by-step procedure is outlined for this fundamental change in our society today. As a whole, this book is the first of the modern era to offer such a comprehensive analysis, complete with solutions to the entire crisis of today's civilization.

Jaan S. Islam
M. R. Islam
Meltem Islam
M.A.H. Mughal

Chapter 1
Introduction

1.1 Opening Remarks

Nobel laureate economist Robert J. Shiller famously pointed out: "Since the global financial crisis and recession of 2007–2009, criticism of the economics profession has intensified. The failure of all but a few professional economists to forecast the episode – the aftereffects of which still linger – has led many to question whether the economics profession contributes anything significant to society." There is no denying the sentiment in today's world that economists, with all their economic theories, have largely disappointed the general public. While energy is the driver of modern civilization, the role played by the economy in the formation of modern culture is pivotal. Any energy management project must ultimately be redeemed by its economic appeal, recent concerns about society and the environment notwithstanding. In this regard, the biggest challenge of a new book on this subject is to present an answer to the question that Shiller's comment poses. Of course, the answer is not trivial, and it is no exaggeration to state that it has eluded all researchers and pundits of the modern era. Recent findings of Islam (2017) reveal another depressing fact: liberal economies grow faster (for instance, Poland) than less liberal ones (for instance, Bulgaria), but they also create inequality and can lead to political failure in the long run. This finding resonates depressingly when one tries to explain the current state of the economy of the United States in particular and the world in general. Recently, Oxfam reported that 388 billionaires had the same amount of money as the bottom 50% of the Earth's population in 2010. The charity's report also said that the richest one percent of the population would own more than half the world's wealth by 2016.
Indeed, the Oxfam report turned out to be 'prophetic': the number of billionaires whose combined wealth equals that of the poorest half of the world's population has declined steadily since 2010. Of course, the discipline of economics has long been disconnected from social justice, but how can the irony of the original definition of economy (from the Greek οίκος, "household", and νέμομαι, "to manage") be lost on us? Only recently, Facebook lost $100 billion in value in 10 days, yet there was no earthquake, flood, or war (Financial Post, 2018). Bitcoin has oscillated between values of $1,000 and $13,000 within days, without any connection to real wealth (Shane, 2017). Nearly 100 million dollars' worth of Bitcoin has been stolen without any real property being physically removed (The Guardian, 2017). What we have seen is information turned into a safe haven of fraud and economic disaster. In the meantime, the "leader of the free world" admits that a steep learning curve awaits the U.S. presidency. However, as usual, 'the state of the union remains strong' – the time of global deception has arrived. Occasionally, facts are spewed out in terms of statistics, national debt, etc. However, at no time are scientific explanations provided as to what these data stand for. For instance, the

U.S. debt is reported to be $21 trillion, almost all of it in Treasury bonds and bills. Excluding some $7 trillion held by states and American citizens, $14 trillion is in the hands of holders outside the U.S. Just over half of that, $7 trillion, is held by the Chinese (Wattles and Mullen, 2018). While listing these economic data, all news outlets fail to point out that none of these 'debts', 'investments', or 'securities' has any meaning in the Information Age, in which exceptionalism is the rule. Tomorrow, the U.S.A. could decide that access to U.S. satellites requires a certain tariff, or some other hysteria could take hold, and all of a sudden all deals would be off. Instead, it is stated that if the Chinese were to begin bleeding off some of those holdings into the financial markets (to recover the costs of the tariffs), their value could begin to go down, quickening with any acceleration of the selling pressure. If that paper began declining rapidly, it could take the value of the U.S. dollar down with it. Overall, there is a great deal of frustration, and people do not feel they are a part of the economic system. The general public has lost confidence in the government, and people feel helpless as everyone agrees that there is a rush toward economic inequality and that the current economic development scheme cannot be sustained. The public mood was captured by the American writer Gore Vidal (1925–2012), who wrote, "The genius of our ruling class is that it has kept a majority of the people from ever questioning the inequity of a system where most people drudge along paying heavy taxes for which they get nothing in return." In the meantime, there is no shortage of alarmists who continue to put out notices that perpetuate further fear, and the public mood spirals down to a level of collective depression.
For instance, only recently, former Greek finance minister and economics professor Yanis Varoufakis warned that Karl Marx 'will have his revenge', as capitalism is coming to an end because it is making itself obsolete (Embury-Dennis, 2017). The former economics professor correctly described how companies such as Google and Facebook, for the first time ever, are having their capital bought and produced by consumers. Based on that observation, however, Varoufakis drew a conclusion that has no connection to the science of economics. What we have here is a conclusion that appeals to everyone, but it has no scientific basis, nor does it offer any solution. The energy management sector is equally hopeless. There are numerous studies and even more voluminous conclusions, but all point to the same main conclusion: we agree to disagree, and life must go on. After spending billions of dollars on research, scientists could not even agree on whether we actually have global warming, let alone on its cause. Considering that today's science is only a degenerated form of millennia-old dogma, it is no surprise that we cannot explain why sunlight is different from fluorescent light, or a microwave different from an open flame, or paraffin wax different from beeswax, or carbon dioxide from trees different from that guzzled out of engine exhausts. If we cannot even explain these fundamental facts, how can today's engineering, which starts off with denaturing and then calculates everything based on New science, begin to offer a solution to what Nobel Chemistry Laureate Robert Curl characterized as a "technological disaster"? Every day, headlines appear showing that, as a human race, we have never had to deal with more unsustainable technologies. How can this spiralling-down path be changed? Who could possibly come to the rescue? The government? In reality, it has long been recognized that politicians are the most toxic of liars.
A New York Times (May 11, 2015) headline reads: All Politicians Lie. Some Lie More Than

Others. When policy makers lie, there appears to be no recourse or escape from a process that has been severely corrupted. The British poet William Shenstone (1714–1763) famously pointed out, "A liar begins with making falsehood appear like truth, and ends with making truth itself appear like falsehood." What we have is the ubiquitous presence of falsehood in every sector. Money being the driver and energy being the vehicle of this civilization, the falsehoods that prevail in the economic and energy sectors are the most extreme, and they alone can cause the economic extremism that the world is experiencing today. This book exposes the falsehoods of the past with objective scientific analysis of both economics and technology development. It then shows how these falsehoods have been concealed by hidden hands in order to keep the world from discovering the scheme that has created the current culture of obscene economic inequality.

1.2 Research Questions Asked

In this book a number of key research questions are asked, and every chapter answers at least one of them. This format has proven particularly useful since our groundbreaking book on the science of global warming and energy management (Islam et al., 2010). It is important to understand the nature of these questions before embarking on the main text of the book. It is equally important to avoid trivializing them, let alone forming premature answers. These are carefully selected questions that are addressed in various chapters with in-depth research; forming an opinion prior to reading the text would create a mental block and obfuscate natural cognition.

1.2.1 What is the Natural State of an Economy?

It is often said that New science is full of paradoxes. Indeed, in a culture obsessed with tangibles and myopic vision, more knowledge amounts to more uncertainty, thereby creating the paradox of 'increasing knowledge' and redeeming the Orwellian 'ignorance is bliss' mantra. The paradox is so deeply rooted that it has become a topic of psychology (Schreiner, 2018). Those looking for an answer are then directed to the productivity paradox, which apparently can be solved with productivity, arguably without creating the increasing-knowledge paradox (Kijek and Kijek, 2018). This would be comical if it were not pathetically true, and it says volumes about modern-day cognition. We sought to resolve the causes of all paradoxes by seeking out the answer to one question: What is true? It turned out to be a pivotal question that needed over 70 pages to answer. That document, in the form of a white paper, became a part of every book we have written since 2006, when we first sought the answer. The usefulness of that paper is that it revealed the source of any paradox: if 'what is true' is properly defined with 100% logic, no paradox can arise from subsequent cognition. When it comes to economics, we pose this research question: What is the natural state of an economy? If this natural state is scientifically defined, there can be no paradox. It is extremely important to come to grips with the reality that economists never defined what is natural before going off to define every natural state involving the economy. It is no surprise that economics is the one discipline that has the most

number of paradoxes. One notable example is the Arrow information paradox, named after Nobel Laureate economist Kenneth Arrow. Information being the prelude to knowledge, what we have effectively created is a double paradox, and all the information we have has become a recipe for the implosive top-down model, called the 'aphenomenal' model by Zatzman and Islam (Figure 1.1). It is established in this book that modern economic models are all aphenomenal and belong to Figure 1.1b, in which decisions are made based on self-interest, profit margin, and myopic quarterly gains rather than going through the abstraction process (Figure 1.1a). In order to answer the research question posed, all current models are deconstructed and their spurious nature unraveled. Only then is the scientifically correct definition of the natural state shown and the truly natural state of an economy presented. The readership is thus prepared to examine all energy policies and to see past such puzzles as Jevons' paradox, gaining a clear picture of what the economics of sustainable energy should entail. The answer to this research question is given in Chapter 2 and part of Chapter 3. In Chapter 2, a delinearized history of money/wealth is provided in order to capture the fundamental traits of the natural state. Chapter 3 then shows how modern economic theories are incapable of tracking any natural state of an economy, thereby remaining irrelevant to studying the overall sustainability of any economic development project.

Figure 1.1 Knowledge model vs. aphenomenal model.

1.2.2 What is the Current State of the Economy?

The American journalist and author Chris Hedges said, "We now live in a nation where doctors destroy health, lawyers destroy justice, universities destroy knowledge, governments destroy freedom, the press destroys information, religion destroys morals, and our banks destroy the economy." We know for a fact that the general public has very little confidence in their respective

governments, which merely represent the same financial elites that control the media and the scientific community. Today, in the U.S.A., Congress' approval rating runs under 20%, and often as low as 13%. It is also worth noting that the approval rating skyrocketed to over 80% only during the post-9/11 hype of the war on terrorism, the very period in which the entire nation fell into unprecedented debt (Figure 1.2). The question then becomes: what is the true state of the economy of the USA, and of the world that the USA leads?

Figure 1.2 Congressional approval rating for the last 32 years (Gallup data).

The state of the economy of the modern world is depicted in Chapter 2, with a focus on the historical degradation of the original meaning of the word 'economics', which once reflected the real value of wealth and the socio-environmental status of a community. We discover not only the current status of the economy but also how we got there. Chapter 3 presents the mindset behind this socio-political degeneration, or the philosophy that drove the world to such a state of obscene inequality in all sectors of society, including the energy sector.

1.2.3 What are the Reasons Behind the Current State of the Economy?

We live in a society that does not ask 'why' questions. Lawyers obey the cardinal rule that a witness should never be asked a 'why' question, which might trigger a response that would be hard to control or to spin toward a desirable outcome. As Figure 1.1 indicates, pathological affinity toward a desired outcome is motivated by the fact that a decision has already been made prior to finding facts, let alone establishing the pathway of the truth. It is the same in the health industry: there is no shortage of medicines and vaccines for incurable diseases, whereas there is no mention of the root causes of those diseases, much less of the path to curing them. It is no surprise that there is a cure for practically no disease, as the world has settled for 'managing' rather than curing ailments (Islam et al., 2015; Islam et al., 2016; Islam et al., 2017). Nothing manifests this crisis better than the culture of cancer in medical science. Numerous studies are published every week, all offering often-conflicting recommendations but always settling for more studies and more research, none of which addresses the real cause of cancer. Similarly, every week numerous studies about economics and policy are published, all offering some sort of cure for the social ailment that has gripped the world, but none identifying its cause.

This research question is pivotal yet amazingly simple. As we begin to answer it, we begin to discover the root cause of this social 'cancer', which would require surgery in the short term but, more importantly, a fundamental lifestyle change in order to alter the course of global economic health. The question is addressed in Chapters 2 and 3, and followed up in Chapter 4, which also discusses the overall state of technology development. Chapter 5 then links the answer to energy sustainability. Such a prolonged discussion was necessary in order to offer the readership a complete diagnosis of the social 'cancer' that has made our economic health untenable.

1.2.4 What is the Current State of Technology Development?

Economics is the driver of today's technology development. So, after answering the economics questions, the natural question that arises concerns technology development. In answering it, the starting point can be: What is the status of New science? Thankfully, this topic has been addressed elsewhere (e.g., Khan and Islam, 2016). Because technology development is entirely based on New science, the answer becomes a review of current practices. That review makes it clear why today's technology development can have no consequence other than a 'technological disaster' that only fuels economic implosion in a 'spiralling down' mode of thinking. Chapter 4 addresses this research question and, in answering it, makes clear what needs to be done in order to render these technologies sustainable.

1.2.5 What is the Current Status of Energy Management?

In today's culture of fear and greed, in which every fear is perpetuated in order to fleece the frightened population, many tactics are in place. The most popular one is the claim that every war we enter is about oil. Then the scientific community, itself another sellout to the grand scheme, rings another warning bell: we are soon to run out of oil, and another resource must be put in place for an extra cost. The so-called peak oil theories pop up from all corners. It is the same hype that was concocted in the last century about the world running out of coal. Then comes another round of apocalyptic messiahs warning us about global warming and vilifying carbon – of all things – as the enemy of life on earth. Before anyone can catch a breath, the economists come out and ring yet another warning bell: all of this must be remedied for a fee, and we simply do not have enough to go around. Chapter 4 deconstructs modern-day hypes and propaganda and calls out the kind of fraud that Enron once personified by calling itself the 'most innovative energy management company'. This chapter puts the final nail in the coffin of the disinformation that has gripped the entire world for decades.

1.2.6 Are Natural Resources Finite and Human Needs Infinite?

Fear mongering in the service of increasing quarterly corporate profit has a long history. It started when the Roman Catholic Church terrorized ordinary people about their propensity to commit sin, all because they were humans born with 'original sin'. New science has merely changed

the 'sacred' designation of this status and sold itself as secular while promoting the same notion of the 'selfish man' whose only mission is to maximize pleasure and minimize pain. Scientists have been so fixated on this notion that they did not bother considering any other premise, let alone a correct and consistent one. The premise was never challenged, and thereby the conclusion always became: humans are so greedy that, no matter what, they will need even more. After all, greed has no limit. Once this is established, the entirety of humanity is perceived as being engaged in a hunger game. No further investigation can be allowed, and all focus must now be on how to survive this hunger game. By answering this research question, the book clarifies what is needed for sustainability and how to progress as a global community. This is a convoluted topic and as such needs to be addressed in multiple forums; accordingly, Chapters 4, 5, and 9 discuss the response to this research question. Chapter 9, which chalks out a clear path for positive government intervention, also addresses this question, because the answer pertains to policy changes that have profound implications for sustainability.

1.2.7 What Would a Model Economic System Look Like?

The history of the modern era vis-à-vis the general welfare of society is bleak. Starting with philosophers and followed by every scientist and economist, the modern era has promoted only the negative side of humanity. It is as if no one could escape the curse of 'original sin'. What we have in modern history is both the political left and right basing their arguments on the same false fundamental premise. The goal here is not to demonize conventional theories and approaches, but to point out that the theoretical flaws in these models make it impossible to predict reality in economics, let alone to model an economy. While Chapter 2 deconstructs all these theories and paradoxes, the question still remains: what is the ideal economic system? We know from Chapter 2 that Aristotle had many of the ideals right and that those ideals were later corrupted for various reasons, but we still do not have a model that captures all aspects of economic standardization. Unfortunately, the current theories (both left and right) do not stop at predicting chaos or anarchy; they go further and assert that the chaos/anarchy model is the only model and that there cannot be an alternative. Zatzman and Islam (2007, p. 56) attributed this philosophy to Baroness Thatcher's infamous statement, "There Is No Alternative", calling it the TINA syndrome. This mindset has prevented scientists, philosophers, and economists from even looking elsewhere. As pointed out by Jaan Islam (2018), an entirely different approach was taken by Islamic philosophers, many of whom have been recognized by modern-day philosophers and scientists as fathers of their respective disciplines. Ibn Khaldun, recognized as the father of sociology, is one of them.
Unlike any modern-day scholar, he made systematic observations of historical events and divided civilization into two categories: the Caliphate (that of prophet Muhammad and his rightly guided caliphs) and the Kingdom (or empire). Modern scholars have characteristically praised Khaldun's Kingdom model and drawn analogies with modern-day events, including democratic reform, while completely ignoring his Caliphate model. In attempting to answer the research question of this section, we bring back Khaldun's Caliphate model and examine it vis-à-vis

all the shortcomings of modern economic systems. Chapter 6 presents our findings and thus makes room for creating a truly sustainable economic system that can function today. It describes a model economy, complete with a model financial structure and model governance.

1.2.8 Is Sustainable Petroleum Technology Possible?

For the longest time, 'sustainable petroleum technology' was considered an oxymoron. Even when we published our book titled Greening of Petroleum Operations, an 800+ page compendium, in 2010, it was a concept many struggled to grasp. After all, we were told that carbon is the enemy (despite carbon being the essence of life), and no one was ready to partake in an intellectual endeavor that challenges core beliefs. However, through the series of books we have published since, what was known from the beginning of civilization has begun to resurface: petroleum resources are natural, and as such there is no reason for them to be unsustainable unless we create a mess during the processing cycle. The problem is thus reduced to developing processing technologies that are sustainable. Chapter 7 discusses how it is not only possible but in fact necessary to render petroleum processing technologies sustainable. As such, petroleum resources become infinitely sustainable. This implies two consequences: (1) there is no limit on the availability of petroleum resources, and (2) growth of the human population will not become a limiting factor, both having very positive effects on the global economy. In addition, energy and material characterization shows how energy sources should be valued and energy pricing made equitable. Petroleum therefore becomes an integral part of the natural cycle, as envisioned in Figure 1.3.

Figure 1.3 Nature is inherently sustainable (From Khan and Islam, 2007).

1.2.9 What Role Should the Government Play?

Criticism of the government has reached mainstream political discourse in the U.S. and other countries, with the rise of the likes of Bernie Sanders and Jesse Ventura calling out government 'corruption'. With the rise of Donald Trump to the presidency, however, cynicism about the government has reached a fever pitch. There is no shortage of quotations on government failure, and this is not unexpected in a culture that has seen the government as an active colluder with the financial establishment that everyone loves to hate, and with the corporate media that everyone sees as the propaganda arm of that establishment. However, the following quote from former Minnesota Governor Jesse Ventura stands out as the most relevant in the context of this book: "Government's role should be only to keep the playing field level, and to work hand in hand with business on issues such as employment. But beyond this, to as great an extent as possible, it should get the hell out of the way." Incidentally, this was the role envisioned by Ibn Khaldun when he presented his model government, which intervened only to correct injustices. In this process, a government must play the role of a benevolent arbitrator charged with striking a balance between nature and humans, between sellers and consumers, and between the state and the external world. It also means that the government must help reverse the culture that historically benefited from war, the denaturing of chemicals, and, in general, the creation of a 'technological disaster'. Chapter 7 discusses the role of the government as part of the overall economics of sustainable energy. This is the only model that assures total sustainability, as it fulfills all four requirements of sustainability: 1) environment, 2) economics, 3) politics, and 4) culture (James et al., 2015).

Chapter 2 Delinearized History of Economics and Energy 2.1 Introduction The history of economic thought has two distinct tracts. The so-called ‘Eurocentric’ tract deals with the different thinkers and theories that considered monetary benefit as the driver of human society. This mindset emerges from a humanity model based on ‘original sin’ and various offshoots of the same concept, such as inherent selfishness, often packaged as part of the ‘survival of the fittest’ doctrine (Islam et al., 2017). The other tract, often dubbed the ‘Islamic’ or oriental one, is akin to a holistic approach to social welfare (Islam, 2018). The Arabic word for economics, iqtisād, is derived from the root qsd, meaning dynamic intention. As such, Zatzman and Islam (2007) used the phrase ‘economics of intention’, in contrast with the economics of profiteering, to describe the Islamic tract of economics.

2.2 The European Tract of Economics History In the Western world, economics was not a separate discipline but part of philosophy until the 18th–19th century Industrial Revolution and the 19th-century Great Divergence, which accelerated economic growth. Long before that, from the Renaissance at least, economics as an intellectual discipline or science was dominated by Western thinkers and their academic institutions, which schooled economists from outside the West, although there are isolated instances in other societies. The European tract of economic thought is claimed to have emerged from Ancient Greek philosophers, such as Aristotle, who allegedly examined ideas about the art of wealth acquisition and questioned whether property is best left in private or public hands. In his work, Topics (Aristotle, 250 BC), Aristotle provides his philosophical analysis of human ends and means. He attached value to a product based on its usefulness to people. While this is a logical premise, it is only later that the term ‘useful’ was conflated with ‘desirable’, thus asserting that whatever is desired is useful. As such, the more desirable a good is, the higher the value of its means of production. This has been a useful axiom for both liberal and conservative philosophers of the European tradition. Conservatives find this axiom useful because it unleashes consumerism, whereas liberals like it because it validates Pragmatism. Aristotle is further credited with a number of economic ideas, such as the necessity of human action, the pursuit of ends by ordering and allocating scarce means, and the reality of human inequality and diversity. Once more, the conflated terms are ‘inequality’ and ‘diversity’. Much as taking ‘light’ as the opposite of ‘darkness’ (and vice versa) implicitly assumes inherent symmetry and isotropy, the entire scholarship of modern Europe has conflated diversity, which is an innate quality of nature, with inequality, which is a subjective term with a false premise attached to it (Khan and Islam, 2016).

Another principle also credited to Aristotle is that actions are necessarily and fundamentally singular (Younkins, 2005). From this position, various interpretations have emerged. Thomas Aquinas, followed by philosophers of the Capitalist mindset, saw this statement as a reason to believe that individual human action in using wealth is what constitutes the economic dimension, and that the sole purpose of economic action is to use the things that are necessary for maintaining life (i.e., survival) and for the ‘Good Life’ (i.e., flourishing). In this, ‘good’ remains undefined, or at least a subjective notion (Islam et al., 2016). One notion of the ‘good life’ is the ‘moral life of virtue through which human beings attain happiness’; this notion of happiness eventually develops into Utilitarianism, which thrives on maximizing pleasure and minimizing pain (Islam, 2018). Karl Marx’s ideal, on the other hand, hinges upon the premise that the economic value of a good or service is determined by the total amount of “socially necessary labor” required to produce it, rather than by the use or pleasure its owner gets from it. Each of these interpretations can be painted as ‘secular’ or doctrinal, depending on the fundamental premises used to interpret it, but none provides a logical explanation (Kenedy Darnoi, 2012). Aristotle repeatedly refers to the underlying objective of all actions, that objective being ‘eudaimonia’ (Greek: εύδαιμονία). This word is commonly translated as happiness or welfare; however, “human flourishing” has been proposed as a more accurate translation. Etymologically, ‘eudaimonia’ consists of the words “eu” (“good”) and “daimōn” (“spirit” or “soul”). This objective is central to the concept of morality and ethics in Aristotelian work1 that uses this objective as the impetus for “aretē”, most often translated as “virtue” or “excellence”, and “phronesis”, often translated as “practical or ethical wisdom”.
In Aristotle’s works, eudaimonia was (based on older Greek tradition) used as the term for the highest human good, and so it is the aim of practical philosophy, including ethics and political philosophy, to consider (and also experience) what it really is and how it can be achieved. Furthermore, it is widely recognized that human nature has capacities pertaining to its dual material and spiritual character, and as such economics is an expression of that dual character. The economic sphere is the intersection between the corporeal and mental aspects of the human person. This fundamentally sound premise was transformed by European philosophers, who turned the dual-nature theory upside down. The ‘dual nature theory’ was concocted by both philosophers and new scientists, who converged the fallacy of original sin with the evolutionary theory of morality. The underlying false premise was that humans are just another species of animal, motivated by pleasure (eating, for instance) and the avoidance of pain. Furthermore, the premise that supported ‘survival of the fittest’ was invoked, thus stating that humanity cannot avoid extinction if it is not motivated to reproduce. Pain and pleasure are opposite sides of the same coin; this is the selfish prime motivator of all life. The dogma of ‘original sin’ was merely regurgitated, with a conclusion more toxic than the original falsehood and with more sinister implications. As pointed out by Khan and Islam (2016) as well as Islam (2018), the Roman Catholic church’s ‘original sin’ and ‘salvation through Jesus’ were replaced in the ‘enlightenment’ era by notions of inalienable natural rights and the potentialities of reason, and universal ideals of love and compassion gave way to civic notions of freedom, equality, and citizenship. There, the definitions of ‘natural’ and ‘universal’ remained arbitrary, devoid of any reasoning or logical thought. This has been an era in which all values spiraled downward through successive degradation via the ever more illogical dogmas and false premises of the following philosophies:

Classical liberalism
Kantianism
Utilitarianism
Nihilism
Pragmatism
Theism
Existentialism
Absurdism
Secular humanism
Logical positivism
Postmodernism
Naturalistic pantheism

In reality, however, economics has become all about the tangible aspects of humanity, and intangible analysis of economics has become an oxymoron, as pointed out by Zatzman and Islam (2007) in their book titled Economics of Intangibles. While Aristotle made a distinction between practical science and speculative science, European scholars, with the prejudice of pragmatism, took ‘practical science’ and turned it into ‘knowledge for the sake of controlling reality’. At this point the science of reality or truth became the science of ‘controlling reality’, commonly known as disinformation with an ulterior motive. While it is true that economics ought to study relationships that are dynamic, modern European scholarship ignored the need to give those dynamic relationships a static, universal, and logical grounding. For millennia prior, such grounding was synonymous with having a standard. That standard has been removed from all branches of new science, including economics. That is, the notion of ‘survival of the fittest’ and earlier concepts like ‘inalienable rights’ and the ‘divine right to rule’ in Christianity (which has no basis in the Bible) contributed to the notion that human life is to be lived according to the wishes of the person: one controls one’s own life purpose. Speculative science, which was meant to encompass pure reasoning, has, ever since the introduction of dogmatic thinking credited to Thomas Aquinas, the father of doctrinal philosophy, turned into entirely illogical thinking (Islam et al., 2010).
In the absence of a fundamentally sound, logical premise, new science is based on illogical and clearly false premises. Khan and Islam (2016) listed the fundamental premises behind each of the theories of ‘hard science’, whereas Islam (2018) has provided a list of the false premises behind each of the social scientific theories. It turns out that the fundamental premises of Plato and Aristotle were not illogical; rather, they were taken and transformed into illogical premises, first by Thomas Aquinas and later by the likes of Kant and other European philosophers. Both supporters and detractors of Capitalism have not veered off these fundamental premises, but have instead justified their conclusions by changing the interpretation of the premises offered by Plato and Aristotle. Aristotle explains that, ontologically, the operation of the economic dimension of reality is inextricably related to the moral and political spheres. This reality has been changed into perception, based on fear and lust. In fact, reality has become subjective, and it is promoted that reality is whatever one can perceive. Islam et al. (2010) rediscovered the original definition of truth and reality. The following logic was used to define natural cognition, natural material, or natural energy:

a. there must be a true basis or source;
b. the truth itself must remain non-refuted continuously over time (it must be absolute); and
c. any break in its continuity or a similar exception must be supported by a true criterion or bifurcation point.

The third item in the above list sets scientific cognition apart from doctrinal or dogmatic cognition; namely, logical discontinuities and observed exceptions in phenomena must and can be supported by other truths. Notwithstanding the longstanding general acceptance of the distinction that Thomas Aquinas is the father of doctrinal philosophy and Averroes2 (Ibn Rushd) the father of secular philosophy, our research uncovers the fact that, regardless of the claim to be operating on an entirely secular basis utterly disconnected from ‘religious bias’ of any kind, all aspects of scientific development in modern Europe have been based on doctrinal philosophy.
If the assumption that New science is based on non-dogmatic logic is set aside, it becomes clear that it is precisely because so many of its original premises are unreal, unprovable, unnatural, or non-existent that modern science is full of paradoxes and contradictions. Every civilization recounted in history, other than the post-Roman Catholic church Eurocentric era, had a clear vision of what constitutes the truth. Plato understood it as synonymous with the real, that which does not change with time (the physical world, being fleeting or a function of time, is not ‘real’). Aristotle understood it as what really ‘is’. All was well until Thomas Aquinas came along. He had access to Avicenna and Averröes, as well as to Aristotle (through Muslim scholars’ translations). However, none of Thomas Aquinas’ writings contained any logical interpretation of Aristotle’s or any other scholar’s work. This trend continued throughout the New science era. Aristotle explains that practical science recognizes the inexact nature of its conclusions as a consequence of human action, which arises from each person’s freedom and uniqueness. Uncertainty emanates from the nature of the world and the free human person, and is a necessary aspect of economic actions that will always be in attendance. Aristotle observes that a practical science such as economics must be intimately connected to concrete circumstances, and that it is proper to begin with what is known to us. For Aristotle, the primary meaning of economics is the action of using the things required for the “good life”. This definition of the ‘good life’ has been transformed from long-term to short-term,3 thereby changing its meaning from the broader good to self-interest. Aristotle also saw economics as a practical science and as a capacity that fosters habits that expedite action. European philosophers disconnected this ‘action’ from conscience or intention and committed firmly to the outcome, which is then attached to outcomes that serve the best interest of the Establishment that controls the society. As such, Economics became a tactic that aids the financial establishment in creating a perception conducive to the selection of goods that maximize profit out of self-satisfaction and redemption. The real impetus of the speculative science becomes fear and lust, as opposed to Aristotle’s original love of the real good. Figure 2.1 shows how failure to adhere to truth criteria increases ignorance. It turns out that all theories in modern science, including Economics, start off with a false premise. In addition, all phenomenological models, while acceptable within a limited scope, are susceptible to the same set of errors if analyzed with an ulterior motive. Note that the knowledge graph (attributed to Averröes, or Ibn Rushd) is a monotonic curve whose slope increases with time. Such a shape arises from the fact that, once known, the knowledge of facts cannot be unlearned. This is contrary to the ignorance graph, which constantly fluctuates, because even with a false first premise it weaves through conclusions that are at times closer to the truth.

Figure 2.1 If truth criteria are not met, time only increases ignorance and false confidence.

The definition of truth and reality was that which is Absolute, External, and Unique. Dogmatic thinking turned that definition into whatever the Papal authority says; Pragmatism then turned it into whatever the Establishment says; and today Utilitarianism stipulates that truth is whatever can be perceived. This perception, of course, is inherently subjective, and truth has therefore been tweaked from being unique to being multiple. This definition has served the Establishment an opportunity on a silver platter, because the Establishment has the media branch that can create disinformation and popular culture, thus creating a perception that will maximize profit. The financial establishment can then make it easy to cash in, followed by the political establishment making sure the perception of a fear-driven reality is perpetual, thus continuing the economic cycle to benefit the small group that controls the existing establishment. The “Thomas Aquinas model” (Figure 2.1) has become the icon of New science. This aspect needs some elaboration. We have used the term ‘New science’ innumerable times in our books. The term ‘New science’ refers to the post-Newtonian era, marked by a focus on the tangible and the short-term. In essence, new science promotes a myopic vision, for which the historical background of any event is ignored in favour of a time slot that fits the desired outcome of a scientific process. Even though not readily recognized, new science is actually rooted in Dogma science – the kind the likes of Thomas Aquinas introduced. We have demonstrated in our recent books (e.g. Islam et al., 2013, 2015, 2016, 2016a; Islam and Khan, 2012, 2016) how dogmatic notions were preserved by Newton, who claimed to have answers to all research questions. Newton’s first premises were no more logical than those of Thomas Aquinas, yet Newton, as well as post-Newton Newtonian fanatics, considered Newton’s work the standard of real science and proceeded to claim that the ultimate knowledge had been achieved for everything. As we will see in later chapters, each of the premises upon which Newton built his ‘laws’ is spurious and inconsistent with the reality of nature, yet those premises were not challenged even by his critics, including George Berkeley (1685–1753), who despised Newtonian mechanics. With such spurious fundamental premises, falsehood, rather than knowledge of the truth, was established as true knowledge. In essence, New science added only the ignorance and arrogance of so-called scientists who were no closer to scientific facts and true knowledge than the likes of Thomas Aquinas. Figure 2.2 demonstrates how the process worked in the Eurocentric culture. In this transition, Albert Einstein and the notion of Quantum mechanics added another level of hubris. Whatever could not be explained by Newtonian ‘laws’ was now claimed to be crystal clear with the arrival of the concept of Quanta. Yet none of the claims of quantum mechanics could be verified with experiments, or even by logical discourse. Instead, a great deal of false confidence was added in order to create the illusion that we are climbing the knowledge ladder, while in fact the only thing we are climbing is the arrogance/ignorance curve, at an ever-increasing rate.
In the Politics, Aristotle views labor as a commodity that has value but does not give value. Aristotle did not formulate the labor theory of value but instead held a theory of the value of labor. Aristotle observed that labor skill is not a determinant of exchange value, but rather that the value of labor skills is given by the goods they command in the market. He maintained that value is not created solely by the expenditure of labor in the production process. Noting that labor skill is a necessary, but not a sufficient, determinant of value, he explains that both utility and labor skills are pertinent to the determination of exchange values and exchange ratios. This is the first known effort to quantify the value of labor. He says that, in the end, the basic requirement of value is utility or usefulness. Once again, it comes down to the words ‘good’ and ‘useful’, which determine the value. New science defines ‘value’ as the ability to satisfy wants. Wants are, however, a function of desire and perception. Demand is thus governed by the desirability of a good, which is equated with ‘value’. According to Aristotle, exchange value is derived from use value as communicated through market demand. This market demand, once again, is a function of public perception. That’s where mass media and government play a role.

Figure 2.2 New science only added more arrogance, blended with ignorance, to Dogma science.

In Book I of the Politics, Aristotle distinguishes between use value and exchange value. It was Aristotle who created the concept of value in use. The “use value” or utility of a good or service depends upon its being productive of an individual person’s good. He explains that the use value of a given article can vary among individuals and that the demand for the item is a function of its use value. Aristotle correctly observes that, as the quantity of the good possessed increases, the use value of that good will begin to decline at some threshold point. This saturation point is reached because human needs are limited due to tangible limitations. Aristotle also states that the use value of a good or service will be increased if it can be consumed conspicuously; that demand will fluctuate as the extent of the use of the item is limited or wide-ranging; and that exchange value and demand are affected by the circumstances of rarity or scarcity. In essence, Aristotle describes natural supply and demand. In addition, Aristotle distinguished between one’s possessions (i.e., final goods) and instruments (i.e., factors) of production, and noted that the “need” of means to an end will vary in accordance with the “need” of the end itself. In the Topics and in the Rhetoric, he notes that the instruments of production derive their value from the instruments of action (i.e., the final products). As long as the word “need” is not conflated with “desire”, “obsession”, or “addiction”, there is no inaccuracy in this statement. However, whenever need is interpreted as ‘desire-driven’, it becomes susceptible to disinformation that can actively promote obsession and addiction to any product, which itself can be deliberately contaminated to enhance ‘desire’ (Islam et al., 2016).
Observing that economic goods derive their value from individual utility or usefulness, Aristotle glimpsed the role of diminishing marginal utility in price formation. He recognized that the value of something could be established by discovering what its addition to (or subtraction from) a group of commodities did to the total value of the group. In the Topics, he stated that the value of one good could be determined by evaluating the impact on the group of adding or subtracting that particular good. The more that is gained by the addition of a good, the higher its value; and the greater the loss from the absence of a good, the more useful the commodity is assessed to be. Various goods can thus be compared by looking at the impact of their addition and subtraction. According to Aristotle, the quantity of a good reaches its saturation point when its use value plunges and becomes immaterial. In Book I of the Politics, he points out that the natural pressures of diminishing utility for goods direct remaining human energy toward moral self-improvement. The implicit assumption is that humans have an inherent desire for moral self-improvement. Morality is an intangible quality and as such has been rendered, as a concept with practical effects, non-existent by New science. In the European tract of economics, Aristotle is credited with having discovered, formulated, and analyzed the problem of commensurability. Commensurability is an important aspect of economics and forms the core of any trading policy. This is a difficult problem that has been trivialized in post-RCC Europe. The complexity of the problem arises from the fact that any analysis has to quantify (render tangible) intangible qualities in order to rank various products according to their usefulness, which is both intangible and subjective. Aristotle’s challenge was to discover how diverse products can be commensurable and thus have an exchange value or price. While the presence of a tangible correlation makes it easier to compare goods (‘strong commensurability’), the science is always complex when it involves intangibles, including cases of future values that might fluctuate depending on numerous other factors. New science designates such goods as ‘weakly commensurable’, mainly because a cardinal value cannot be assigned to them (O’Neill, 1993). Aristotle’s objective was to prove that every exchange of goods has to be an exchange of equivalents. He clearly stated that goods must be equalized somehow by some common measure that must be both universal and time-honored. If we are faced with two goods that are incommensurable, or at least whose commensurability is not clear, there is a temptation to find a common dimension and then compare them along that dimension. This temptation of finding a common dimension does not belong to Aristotle but has become a hallmark of New science.
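Aristotle's method of valuing a good by its addition to or subtraction from a bundle, together with diminishing marginal utility, can be illustrated with a small numerical sketch. This is only an illustration: the utility function and the quantities are invented for demonstration and are not drawn from the source.

```python
# Hypothetical illustration of Aristotle's add/subtract valuation and
# diminishing marginal utility. Utility function and bundles are invented.
import math

def bundle_utility(quantities):
    """Total utility of a bundle; each good's contribution grows with
    diminishing returns (log-shaped), mimicking a saturation point."""
    return sum(math.log(1 + q) for q in quantities.values())

def marginal_value(quantities, good):
    """Value of one more unit of `good`: the change in total bundle
    utility when that unit is added (the 'addition' test)."""
    with_extra = dict(quantities)
    with_extra[good] = with_extra.get(good, 0) + 1
    return bundle_utility(with_extra) - bundle_utility(quantities)

bundle = {"grain": 1, "wine": 10}
# The scarcer good (grain) commands the higher marginal value...
assert marginal_value(bundle, "grain") > marginal_value(bundle, "wine")
# ...and each successive unit of the same good adds less than the last,
# i.e., diminishing marginal utility toward a saturation point.
richer = {"grain": 5, "wine": 10}
assert marginal_value(richer, "grain") < marginal_value(bundle, "grain")
```

The shape of the utility function is the key assumption here; any concave (saturating) function would reproduce the qualitative behavior the text describes.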
Khan and Islam (2012, 2016) offer a detailed discussion of how such a mindset has prompted scientists to introduce numerous ranking systems that are all illogical and scientifically absurd. One particular example cited by them is the comparison of various sweeteners with honey based on one dimension, i.e., sweetness. Aristotle’s premise was that there will be no exchange without equivalency and that there can be no equality without commensurability. The word ‘equivalency’ was conflated with ‘equality’; thus, New science interpreted Aristotle’s premise as follows: when people associate for the exchange of goods, each must be satisfied that both utilities and costs are equalized before the exchange takes place. Persons stand as equals in exchange as soon as their commodities are equalized. As such, there can be an exchange of honey for Aspartame as long as both parties agree. That agreement itself creates equality between honey and Aspartame, although one is a ‘cure for humanity’ and the other is a neurotoxin. This paradox comes from a false interpretation of Aristotle’s premise. Aristotle stated that a common standard can render any good amenable to comparison as long as the common standard, such as gold, is constant and universal. As such, Aristotle did not invoke any inconsistency, and in fact made a significant contribution to economic theories of monetization. For instance, a distance measured in miles and a volume of water measured in gallons are incommensurable. However, if the distance can be reduced to a cost of transport and the water volume likewise reduced to a cost of production, the two can be compared, solely because a common paradigm has been created. Philosophically, the word ‘incommensurable’ means there is no common theoretical language that can be used to compare two things. If two scientific theories or lines of deductive logic are incommensurable, there is no way to compare them to each other in order to determine which is better. In ethics, two values (or norms, reasons, or goods) are incommensurable when they do not share a common standard of measurement. In legal terms, there can be no case if there is not a set of facts recognized as facts by both parties. Even though some authors, such as Thomas Kuhn (1962), recognized that forcing incommensurate entities to be commensurate is equivalent to a paradigm shift – a radical shift rather than a continuous change, and as such something like a religion or cult rather than a logical discourse – few if any have examined the fundamental premises of any theory, thereby advancing false premises without critical analysis. The idea that scientific paradigms are incommensurable was popularized by the philosopher and historian of science Thomas Kuhn (1962). He wrote that when paradigms change, the world itself changes with them. According to Kuhn, the proponents of different scientific paradigms cannot make full contact with each other’s point of view because they are, in a manner of speaking, living in different worlds. Kuhn gave three reasons for this inability:

1. Proponents of competing paradigms have different ideas about the importance of solving various scientific problems, and about the standards that a solution should satisfy.
2. The vocabulary and problem-solving methods that the paradigms use can be different: the proponents of competing paradigms utilize a different conceptual network.
3. The proponents of different paradigms see the world in a different way because of their scientific training and prior experience in research.
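The miles-versus-gallons illustration of forcing commensurability can be sketched in a few lines. All rates here are invented placeholders, not values from the source; the point is only the mechanics of mapping two incommensurable quantities onto a shared cost dimension before comparing them.

```python
# Hypothetical sketch: rendering incommensurable quantities commensurable
# by reducing both to a common dimension (cost). All rates are invented.

TRANSPORT_COST_PER_MILE = 2.50     # currency units per mile (assumed)
PRODUCTION_COST_PER_GALLON = 0.40  # currency units per gallon (assumed)

def cost_of_distance(miles):
    """Reduce a distance to a cost of transport."""
    return miles * TRANSPORT_COST_PER_MILE

def cost_of_water(gallons):
    """Reduce a water volume to a cost of production."""
    return gallons * PRODUCTION_COST_PER_GALLON

# 100 miles vs. 500 gallons: meaningless to compare directly, but
# comparable once both are expressed in the common cost dimension.
assert cost_of_distance(100) > cost_of_water(500)
```

Note that the resulting ranking depends entirely on the chosen dimension and the assumed rates, which echoes the text's warning that forcing commensurability through a conveniently selected dimension can be made to yield whatever conclusion is desired.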
Although Aristotle maintains that everything can be expressed in the universal equivalent of a standard, that standard was set to ‘money’ in the modern age of New science, and money itself has gone through numerous transformations during the modern era. This standard, which is supposed to be constant and universal, has become subjective and dynamic. As we will see in later chapters of this book, the definition of money has been manipulated in such a way that it is disconnected from real wealth, and the entirety of economics has become artificial, devoid of proper grounding. Aristotle realized that the possibility of a measure presumes prior commensurability with respect to the dimension by which measurement is possible. He therefore saw his idea as deficient, and he did not offer a remedy. However, modern philosophers ‘solved’ that problem by forcing commensurability through the selection of a dimension that would lead to the desired solution (Khan and Islam, 2016), which is discussed later in this volume. Aristotle next says that goods become commensurable in relation to need – the unit of value is need, or demand. Need, rather than something innate in the goods themselves, is what makes them epistemically commensurable. Aristotle observes, however, that although need is capable of variable magnitudes, it lacks a unit of measure until a standard is introduced to provide one. Ultimately, he concludes that it may be impossible for different goods and services to be strictly commensurable. In his idea of commensurability, Aristotle was the first to identify a serious and authentic problem of economics. In Book V of the Nicomachean Ethics and Book I of the Politics, Aristotle distinguishes between universal justice and particular justice. European scholarship did not consider universal justice as playing a role in economics. In fact, morality, once disconnected from a universal standard that would require the invocation of a creator truly external to humanity, bore little resemblance to the original, time-honored concept of morality. Islam (2018), in his seminal work, discusses the need to restore the concept of an external and universal – i.e., logical – standard before starting any discourse on justice, including justice in economic dealings. One misconception that has remained unchallenged until recently is that economic dealings are subject to the rules of particular justice, meaning one can formulate an economic system without consideration of long-term, i.e., moral, consequences; as discussed above, this is the vindicator of the notion that pragmatics ought to drive the economic (and also political) aspects of human life. This has been convenient, because particular justice involves quantitative relationships that are readily amenable to mathematical, hence tangible, expression. Such expression is necessary for the linearization of any complex process. While European scholars have debated whether Aristotle includes only distributive and corrective (i.e., rectificatory) justice or also means to include commutative (i.e., reciprocal) justice in the category of particular justice, the point that particular justice becomes arbitrary in the absence of grounding in universal justice has been missed by everyone. Aristotle says that distributive justice is natural justice and involves balancing shares with worth. In turn, rectificatory justice involves straightening out by removing unjust gains, restoring unjust losses, and other forms of retribution for loss and/or damages. Reciprocity involves the interchange of goods and services and does not coincide with either distributive or corrective justice.
Reciprocal justice involves comparative advantages and is concerned with particularized mutual benefits derived from specialization of function. In the Nicomachean Ethics, Aristotle states that exchange depends on equality of both persons and commodities. It is in this work that he concentrates on the problem of commensurability. Aristotle used artisans as examples for his general and abstract discussions found in this work. In Book V of the Nicomachean Ethics he deals with justice – which is concerned with determining proper shares in various relationships – analyzes the subjective interactions between trading partners looking for mutual benefit from commercial transactions, develops the concept of mutual subjective utility as the basis of exchange, and develops the concept of reciprocity in accordance with proportion. In the Nicomachean Ethics, Aristotle, in his treatment of justice, applied the concepts of ratio and proportion to explain just distribution. Aristotle’s vision about justice could have triggered a paradigm shift in dealing and trading with morality-driven justice. However, European philosophers missed this work altogether. According to Aristotle, value is assigned by man and is not inherent in the goods themselves. Implicit to this argument is the fact that the standard that retains its value irrespective of its demand or usefulness. For general commodities, according to Aristotle, any exchange occurs because the needs of participants differ, each creating different subjective value. This is the essence of trading. Need plus demand is what goes into determining proportionate reciprocity in a given situation. Aristotle explains that the parties form their own estimations, bargain in the market, and make their own terms and exchange ratios. This process is entirely natural as long as need is not conflated with desire and demand is not manipulated by a certain entity that

wants to sell certain goods. In this process, the exchange ratio is simply the price of goods. As such, for Aristotle, what is voluntary is presumed just. Exchange must be mutually satisfactory. He sees mutuality as the basis for exchange and the equating of subjective utilities as the precondition of exchange. There is a range of reciprocal mutuality that brings about exchange. The actual particular price is determined by bargaining between the two parties, who are equal as persons and different only with respect to their products. The problem with this, however, is that Aristotle did have a moral compass and expected humans to uphold individual moral standards and accountability, whereas today’s society does not. Today, in the absence of a moral compass and in the presence of ubiquitous mind-altering advertisements, the word ‘voluntary’ has little significance. Aristotle observed that an exchange ratio is not a ratio of goods alone nor merely a ratio of the people exchanging the goods involved in the transaction. Rather, it is simultaneously a ratio reflecting the interrelationships among and between all of the people and all of the goods involved in the transaction. In a society that upholds individual moral conviction to universal justice, this ratio of proportionate reciprocity would equalize both goods and persons. In philosophical terms, we can say that this book reintroduces Aristotle’s objectivity into today’s “particular economics”, in the sense that we are bringing objective matters like universal/general justice into particular matters, which are usually viewed as pragmatic exchanges between people for their self-benefit. This so-called equilibrium was turned upside down by making it the epitome of disparity between two exchanging parties. Aristotle’s position was taken to render any trade and exchange subjective, in which one party can arbitrarily maximize its own benefit by exploiting the vulnerability of the other party. 
On a small scale, the moral equivalent of this relationship is that between a master and a slave. In any particular interaction, the slave is overwhelmingly at a disadvantage as long as the master is not held accountable by a moral code that is external to both the master and the slave. From this point on, the pyramid effect takes off. On a larger scale, similar relationships develop between corporations and consumers, government and the people, and corporate media and the general public. As once pointed out by U.S. Senator Bernie Sanders, the trinity of the three establishments (the financial establishment, corporate media, and political establishment) gels together to bring social justice down to an artificial equilibrium point, akin to an implosion threshold (the lower rectangle of Figure 2.3). The financial disparity between the 1% and the 99% becomes so extreme that such a status cannot be sustained. Contrast that with the movement envisioned by Aristotle, as depicted in the upward-moving graph. This graph also reaches saturation (upper rectangle), albeit at a homogeneous economic status. The most important difference between the two graphs is the starting point. In the natural one, both parties are governed by the same moral code, whereas in the downward artificial graph, the stronger party is governed by a moral code that suits its own greed, or there is no code at all – invoking a ‘might is right’ modus operandi.

Figure 2.3 Exchange of goods can cause economic growth or collapse, depending on the starting point (solid circle: natural starting point; hollow circle: unnatural/artificial starting point). The solid circle in Figure 2.3 represents a natural starting point, in which the transaction is based on need and an external morality that both parties adhere to. The hollow circle represents a starting point at which an inherently unjust relationship defines the first transaction. The former leads to a natural equilibrium, while the latter leads to an artificial equilibrium, which is actually a nadir of economic disparity and social injustice. As for the slope of the graphs in Figure 2.3, a steep slope represents acceleration or deceleration of economic development: as the number of transactions within a certain timeframe increases, the rate of economic movement goes up drastically. An example of a natural transaction is charity for food. The exchange is immediate because people in hunger would not hoard the money and would spend it immediately. On the other hand, if everyone spends on useless products using credit cards, the guaranteed benefit goes to the banks whereas the movement of useful products stalls. For Aristotle, money is a medium of exchange that makes exchange easier by translating subjective qualitative phenomena into objective quantitative phenomena. This is a transition

between intangible (for instance, need) and tangible (for instance, food) without the hassle of finding an equivalent through bartering. Although subjective psychological need satisfaction cannot be directly measured, the approximate extent of need satisfaction can be articulated indirectly through money. Not only does money eliminate the need for a double coincidence of needs and wants (i.e., through barter), it also supplies a convenient and acceptable expression for the exchange ratio between various goods. Money, as an intermediate measure of all things, is able to express reciprocity in accordance with a proportion and not on the basis of a precisely equal ratio. This important feature of money, however, applies only to a universal standard that cannot be manipulated and whose value is sustained over time. Only then would Aristotle’s vision of money come to fruition. Money, as a modulating element and representation of demand, becomes a useful common terminological tool in the legal stage of the bargaining process. In the Politics, Aristotle discusses exchange, barter, retail trade, and usury. He explains that exchange takes place because of natural needs and the fact that some people have more of a good and some have less of it. He says that natural exchange redistributes goods to supply deficiencies out of surpluses. Voluntary exchange occurs between self-sufficient citizens who exchange surpluses, which they value less, for their neighbour’s surpluses, which they value more. For Aristotle, true wealth is the available stock of useful things (i.e., use values). He is concerned with having enough useful things to maintain the needs of the household and the polis (the primordial human community). He says that wealth gathering that aims at use-value is legitimate and beneficial to the society that eventually becomes the recipient of the accumulated wealth. 
Use-value or true value involves goods that are “necessary” for life and for the household or the community of the city. Aristotle considers both the household and the polis to be natural forms of association. It is not against nature when individual households mutually exchange surpluses to satisfy the natural requirement of self-sufficiency. Aristotle maintains that property must be used in a way that is compatible with its nature. Its use must benefit the owner by also being a necessary means of his acting in correspondence with his own nature. In the Politics, Aristotle distinguishes between natural and unnatural acquisition and discusses the problem of excess property. He says that the right to property is limited to what is sufficient to sustain the household and the polis life of the city, and explains that the exchange between households requires mutual judgments of equal participants in the life of the polis. The life of the household is a sound and productive means to polis life if it produces only the necessary goods and services that provide a setting for the exercise and development of the potentialities required for polis life. Aristotle emphasizes the importance of natural limits in a system of natural relationships. He says that natural exchange has a natural end when the item needed is acquired. Production is the natural process of obtaining things for life’s needs. Aristotle maintains that there is a limit to the amount of property that can be justifiably acquired as well as a limit to the ways in which it can be legitimately acquired. According to Aristotle, a household relies on exchange to supply property necessary to the

household so that the citizen can develop his humanity. It is presumed that humans are motivated by need and not greed. In other words, it is morality that dictates economic development rather than individual lust and greed. Natural exchange operates within an environment of friendship and mutual concern to complement the basic self-sufficiency of the household. Natural exchange between households requires the exercise of ‘virtues’ and furnishes a bridge between one’s work and well-being. A wide range of material goods is needed to attain a person’s moral excellence. Economic activity is necessary to permit leisure and the material instruments necessary for a person to develop the full range of his potential and thereby flourish. Aristotle teaches that eudaimonia involves the total spectrum of moral and intellectual excellences. As discussed earlier, both ‘virtues’ and eudaimonia relate to morality that comes from a higher purpose of life. Today, the purpose of life has been reduced to ‘be happy’, ‘have fun’, ‘live to the fullest’ (translation: maximize pleasure and minimize pain). Everything in history has been reconstructed to support this latest notion of the purpose of life (Islam et al., 2017). For instance, one can cite the example of Antisthenes, a pupil of Socrates, who is known to have outlined the themes of Cynicism, stating that the purpose of life is living a life of virtue in agreement with Nature. Happiness depends upon being self-sufficient and master of one’s mental attitude; suffering is the consequence of false judgments of value, which cause negative emotions and a concomitant vicious character. This philosophy has no contradiction with the notion of the purpose of life being connected to the Creator, the ultimate external and universal authority. In fact, it is further stated that the Cynical life rejects conventional desires for wealth, power, health, and fame, by being free of the possessions acquired in pursuing the conventional. 
Once again, there is no contradiction with the status of humans being ‘viceroy’, with a distinct moral purpose. However, this has been manipulated by new scientists (Khan and Islam, 2016). It is said: “as reasoning creatures, people could achieve happiness via rigorous training, by living in a way natural to human beings. The world equally belongs to everyone, so suffering is caused by false judgments of what is valuable and what is worthless per the customs and conventions of society.” Then it is asserted that whatever comes naturally is ‘natural’ and that whatever gives one instant pleasure and quick short-term victory is valuable, turning everything into a race for pleasure in this world. From this point onward, economics becomes entirely wrong-headed, portraying the artificial as natural. Aristotle explains that wealth derives its value from its contribution to the acquisition of other goods needed for their own sake. Wealth and external or exterior goods are instruments that facilitate virtuous activity and eudaimonia, are a means to an end, and have some natural limit with respect to each individual. Contrast this to the current understanding of the same concepts Aristotle uses, such as ‘good life’, ‘luxury’ and ‘pleasure’. The scientific interpretation is distorted not only because its purpose is to justify Utilitarianism, of which Aristotle was NOT a proponent (nor even was Epicurus), but also because it completely ignores the implications this would have for (i.e., how it would contradict) Aristotle’s entire life philosophy. Aristotle says that the polis exists for the sake of the good life, that the polis is a partnership in living well, and that mutual interaction is the bond that holds society together. We have already seen how the term ‘good’ applies against a universal standard, rather than a good that holds only in the short term. Aristotle observed that people are related to each other through the medium of

goods but that acquisition beyond the necessary diverts the citizen’s capacities from the sphere of polis life. Advocating an inclusive-end teleology, Aristotle endorsed an active life devoted to a wide range of intellectual and moral perfections, including active engagement in civic affairs. The society in question is far from what is perceived today as a democratic society, where government officials are totally detached from moral obligations (Islam, 2018). In the Politics, Aristotle advanced the synergistic idea of social aggregation, with the aggregate benefits to people exceeding the objective total of the benefits to individuals qua individuals. This synergy can only exist in a society that functions based on the model we defined as knowledge-based (Islam et al., 2013). The order in this society starts from each household, then moves toward the larger community. Aristotle sees this excess amount of benefits as a positive measurement of the goodwill created through association and as a reflection of the unifying strength of a society. In part, it is the mutual benefits of exchange that bring people together, with one desiring another’s goods more than he desires his own and vice versa. In the Politics, Aristotle delineates the historical development of money from its initial existence as a means of convenient commensurability. The biggest appeal of money for him was that money made it possible to equate what is apparently unequal and non-comparable. As discussed earlier, money, as a common measure of everything, makes things commensurable and makes it possible to equalize different goods. He states that it is in the form of money, a substance that has a telos (purpose), that individuals have devised a unit that supplies a measure on the basis of which just exchange can take place. 
Aristotle thus maintains that everything can be expressed in the universal equivalent of money – indeed, money itself was introduced with the purpose of satisfying the requirement that all items exchanged must be comparable in some way. The most crucial aspect of Aristotle’s understanding was that money was not separate or detached from purpose. As such, money for Aristotle was integral to the intention of individuals. This point was reiterated by Zatzman and Islam (2007), who reported a delinearized history of money. Their characterization of money is in line with Aristotle’s framework, which defined the characteristics of a good form of money as: 1. It must be durable. Money must stand the test of time and the elements. It must not fade, corrode, or change through time. 2. It must be portable. Money holds a high amount of ‘worth’ relative to its weight and size. 3. It must be divisible. Money should be relatively easy to separate and re-combine without affecting its fundamental characteristics. An extension of this idea is that the item should be ‘fungible’. 4. It must have intrinsic value. This value of money should be independent of any other object and contained in the money itself. On the European track, following a buildup of their incorrect interpretation of Aristotle branching out of their incorrect first premises, money has evolved around the disinformation that money can be engineered to assign a value of choice, thus disconnecting the intrinsic value

of money, as correctly envisioned by Aristotle. The most significant departure was marked by assigning money the trait of a commodity. This accelerated the spiral down from the concept of money as a standard for comparing commodities to money as the most precious commodity. Aristotle discusses the entire range of commodity exchange, including barter, retail trade, and usury, and declares that the first type of exchange, barter, the direct non-monetary exchange of commodities, is natural because it satisfies the natural requirement of sufficiency. This type of natural exchange of goods is motivated by need and not greed, from the perspective of both consumers and sellers. After direct working of the land, barter between households is the next most natural means of trade. For Aristotle, natural exchange is based on the right to property being determined by the capacity for its proper use. He sees barter as natural but inadequate because of the difficulty of matching households with complementary surpluses and deficiencies. The concepts of surplus and deficiency are normative and derive from the right of property. Aristotle is irresolute and ambivalent regarding the second form of exchange, which involves the transferring of goods between households but mediated by money. Here each participant starts and ends with a use value, which each party approves of, but the item is not being used for its natural aim or function because it was not made to be exchanged. As we will see in a later section, this in itself makes a process unnatural and obscures the path of sustainability. Aristotle observes that what is natural is better than what is acquired and that an item that is final is superior to another thing that is wanted for the sake of this item. The introduction of money eliminates the problem of the double coincidence of wants. For Aristotle, the legitimate end of money is as a medium of exchange but not as wealth or as a store of value. 
He observed that money became the representation of want by agreement, i.e., by law. A currency acceptable within the polis permits its full potential to be realized. The underlying assumption of this, however, is that the polis does have a consensus and there exists a standard for law. Aristotle thought that money departs from its natural function as a medium of exchange when it becomes the beginning and end of exchange, with no limit to the end it seeks. The ease of exchange permitted by the use of money makes it possible to engage in large production projects for exchange purposes instead of for direct household use. This can corrupt natural exchange, for which money is a valuable instrument. Money, rather than serving simply to facilitate commodity exchange, can become the goal and end in itself. This is what Zatzman and Islam (2007) termed ‘making money for the sake of making money’ and designated as the source of economic downfall. In the third form of exchange, retail trade, a person buys in order to sell at a profit. Retail trade is concerned with getting a sum of money rather than acquiring something that is needed and therefore consumed. Whereas Aristotle views household management as praiseworthy and as having a natural terminus, he is skeptical about retail trade because it has no natural terminus and is only concerned with getting a sum of money. Retail trade knows no limits. When money becomes an intermediate element in exchange, the natural limits on physical wants no longer exercise restraints on a person’s desires. The lack of effective natural restraints makes the

process of exchange vulnerable to a drive for wealth accumulation. There exist no natural conditions restricting a person’s desire to acquire money wealth. However, artificial restrictions, for instance as in a communist society, would restrict the economy and remove the benefit of a free market economy. For Aristotle, retail trade is not a way of attaining true wealth because its goal is a quantity of money. He criticizes money-making as a way of gaining wealth. The end of retail trade is not true wealth but wealth as exchange value in the form of a sum of money. Aristotle observes that exchange value is essentially a quantitative matter that has no limit of its own. He says that it is from the existence of wealth as exchange value that we derive the idea that wealth is unlimited. The conflation of wealth and resource has led to the perception that it is natural to accumulate wealth. In Book V of the Nicomachean Ethics, Aristotle states that commodity exchange between craftsmen is a natural but inferior form of exchange that is not closely connected to polis life. He says that craftsmen are involved with specialized production based on unlimited and unnatural acquisition, are not the equals of household heads, and are therefore unsuited for citizenship and for polis life. We will see in a later section how this ‘inferiority’ of exchange between craftsmen derives from the fact that, without going through the standard of money, the exchange rate is not clear. The fourth form of trade is usury – the begetting of money from money. Aristotle says that the usurer is the most unnatural of all practitioners of the art of money-making. The lending of money at interest is condemned as the most unnatural mode of acquisition. Aristotle insisted that money was barren. He did not comprehend that interest was payment for the productive use of resources made available by another person. 
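Aristotle's point that exchange value "has no limit of its own" can be illustrated with simple arithmetic. The sketch below is our own illustration, not from the book: money lent at compound interest grows geometrically, with no natural terminus of the kind that bounds exchange aimed at use-values.

```python
# Illustrative arithmetic (an illustration of the point, not from the book):
# money lent at compound interest "begets money" with no natural terminus --
# the balance grows geometrically for as long as the loan is rolled over.

def compound(principal, rate, years):
    """Balance after `years` of annual compounding at `rate`."""
    return principal * (1 + rate) ** years

# 100 units lent at 10% per year roughly doubles every 7.3 years
# and keeps growing without bound thereafter.
for years in (1, 10, 50):
    print(years, round(compound(100, 0.10, years), 2))
```

A loan of 100 units at 10% exceeds 11,000 units after 50 years; no physical need could grow at that pace, which is the sense in which usury has no natural end point.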
Zatzman and Islam (2007) demonstrated that usury or interest creates an aphenomenal starting point and launches economic collapse (Figure 2.3). In essence, the European version of economics has become the justification of Pragmatism and Utilitarianism. Similar to Aristotle’s vision of economics concerning both the household and the polis, modern economics is deeply involved in politics, which can more appropriately be called ‘politicking’. As Table 2.1 shows, all of the premises of Aristotle were converted into economic principles that are inherently false and only serve to justify false premises that emerged from conflation. The possibility that this disinformation was deliberate cannot escape an inquisitive mind.

Table 2.1 Fundamental premises of Aristotle in relation to economic theories. (Each row reads: Aristotle’s premise | New science’s conflation | Aphenomenal conclusion | Economic principle.)

Economics is the science of matters that are ‘useful’ | Useful = desirable | Desirability dictates value | Create desire to create value, hence profitability
Human action is unique and morality relates to human action | Labour = tangible expression, disconnected from conscience or intention – the essence of morality | Tangible outcome is the measure of success | Evaluate labour based on profitability
Objective of human actions is ‘eudaimonia’ (human flourishing, or long-term or intangible success) | ‘Eudaimonia’ = happiness | Tangible trumps intangible | “In the long run, we’re all dead”
Diversity is a trait of nature | Diversity = inequality | Inequality and economic disparity are natural | Economic extremism is unavoidable
Humans have dual natures (tangible and intangible) | Dual nature = fear and lust | It is natural to maximize pleasure and minimize pain | Culture of ‘sex and fear’ sells, maximizing profit
Practical science and speculative science are for knowledge of reality | Knowledge of reality = tools of control of reality | Gathering knowledge is for control of reality; survival of the fittest, fear of extinction | Purpose of information is to maximize profit; fear sells
Love of good life | Good = whatever maximizes pleasure at present time and avoids pain | Fear and lust | Create fear and temptation to maximize profit
Reality is objective | Reality = perception | Reality is subjective | Reality can be created through politics to maximize profit
Value determined by usefulness | Useful = desirable (based on fear and lust) | Purpose of life is maximizing pleasure and minimizing pain | Desire creates profit
Demand based on wants | Wants = desires | Humans are but for their desires; desires are limitless | Demand matches perception and thus profitability
Needs are natural | Need = desire | Desire is natural, addiction is a personal liability | Manipulate desire and invoke addiction to increase sales
Natural disasters affect pricing and supply and demand | Perception is natural; a perception-driven market is natural | If it exists, it must be natural, and it creates perception | Declare anything natural
There has to be a universal (space), time-honored, and objective standard of reality | Gold = currency; standard is whatever is agreed upon | Standard can fluctuate and vary with interest rate, for instance; perception dictates standard, which can be created subjectively | Value of currency moves with interest rate; any value can be added to the standard
No trade for incommensurate goods, i.e., goods must be equivalent for them to be exchanged | Equivalency = equality | A common dimension can be found and goods can be compared based on that dimension | As long as the quality in a certain dimension is established, it can be sold as ‘value’
Everything can be reduced to a standard that measures its value | Standard = money | “Money” is a matter of perception that can be manipulated to control the value of goods | –
Synergy occurs in a society based on good will | Synergy can exist just with ethical standards | Synergy can be manufactured by thrusting ideals onto the consumer | –
Exchange of goods must be voluntary | Voluntary is without a moral compass | As long as agreed, it is morally/ethically acceptable | Invest in advertisement to alter mindsets
Voluntary exchange is just | Exchange is subjective | As long as both parties agree, the exchange is just | Egregious economic imbalance is just and part of natural cycles
There is a natural cause behind economic exchange | Natural is whatever comes naturally, therefore need = desire | Desire is natural | Buy to satisfy desire
Society functions with a higher purpose | Selfishness or short-term gain is higher than long-term gain; moral standard = ethical standard | Seeking self-interest in the short term is natural | Maximize consumption in order to increase sales
Money helps build commensurability of diverse commodities | Money is a commodity | Want and desire of money is natural; money or price defines value | Market price is a product of engineering perceptions
Trading is motivated by overall good | Overall good is the accumulation of wealth | Trading is motivated by personal good | Wealth accumulation
Gaining wealth through money-making is not natural | Money is money, and natural is whatever produces the desired outcome | Investing in money for the sake of creating money is natural | –
Retailing is wealth generation through a ‘money for sake of money’ scheme | – | Hoarding and accumulation of goods increase economic development | –
Money is barren; usury is unnatural | Money is a commodity; interest is natural and correlated with profit | Usury is the fastest way to increase wealth; benefit of money can be fixed | Loaning the same money to many recipients can maximize wealth accumulation
Wealth is useful resource | Wealth = resource, hence unlimited | Wealth doesn’t have to be real, i.e., can be disconnected from real resource | –
Accumulating wealth without a noble purpose is not natural | Purpose is independent of noble cause | Selfishness is noble and natural | Value of wealth can be assigned by the financial establishment; there is no difference between top-down and bottom-up economy

Aristotle’s work wasn’t all lost during the new science era. The work of Mises (1912) is practically a renewal of Aristotle’s original views about economics. Although Austrians have frequently criticized neoclassical economics for the unrealistic character of its assumptions – the ones we point out in this section – no other economists have picked up on Aristotle’s original premises. That includes those economists who supported aspects of Aristotle’s thought but opposed Mises. For instance, Friedman (1953) defended the use of unrealistic models against Mises’ criticisms, on the grounds that any good explanatory theory must be abstract, and abstractions by their very nature are unrealistic. The parallel between Mises’s criticism of a priori ethics and Friedman’s criticism of Mises’s own a priori economics is striking – and should lead us to suspect that Mises has here fallen into Friedman’s own confusion between the private character of an “inner voice” and the public character of logic (Long, 2006). It turns out that none of the subsequent economists could put together a consistent economic theory; each of their theories suffers from severely dogmatic assumptions, although each could explain certain aspects of modern economics. In general, Aristotle’s economic criticisms are directed at wealth gathering for the purpose of

“money-making”. He is criticized for disregarding the fact that men were able to search for unlimited wealth even before ‘money’ came into existence. Modern scholarship has failed to see that the matter is not about money as a currency; Aristotle was opposed to generating wealth for the sake of accumulating wealth, detached from moral responsibility toward society at large. When Aristotle called the lust for money or unfair trade (e.g., involving usury) unnatural, European scholars took it as though Aristotle was opposed to accumulating wealth irrespective of the nobility of the purpose. Although he realized that wanting too much is a human failing, he placed a great deal of blame on money because it had no natural terminus. It is not certain whether it was Aristotle’s misunderstanding or a European addition that the purpose of money became detached from the original meaning of money, or ‘good’ money. Aristotle taught that when a man pursues wealth in the form of exchange value he undermines the proper and moral use of his human capacities. This seems to be accurate considering that Aristotle understood such failing occurs only when a man is accumulating money for the sake of money. Nevertheless, European scholars continue to peddle the ‘virtue’ of money, stating that men of commerce provide useful public service and make money only if they do so. Today, value has been assigned to the fakest form of money, i.e., information that is entirely derived from disinformation and planted stories. All of the latest Nobel prizewinners in economics have been involved in the process of turning information into a redeemable asset – in fact, the only asset – all using the aphenomenal model depicted in Figure 1(b). Singaljan (2018) talks about the peril of this aspect of the information age. 
Consider a story involving a Harvard professor’s comment that went viral with the ultimate twist: the neo-Nazi Daily Stormer website ran an article headlined, in part, “Harvard Jew Professor Admits the Alt-Right Is Right About Everything.” A tweet of the video published by the self-described “Right-Wing Rabble-Rouser” Alex Witoslawski got hundreds of retweets, including one from the white-nationalist leader Richard Spencer. This popular example gives one an idea of how any information can be tweaked to fit an agenda that has nothing to do with reality.

2.3 Transition of Money

Throughout history, money had the express use of commensurability, but we have seen the adoption of various forms of money that have reduced money from a useful tool to a tool for economic collapse. Here are some examples, with relative merits noted. On the European track, a thousand years ago, the ownership title of a land parcel or a business was mainly for registry, with tax consequences. In the European economy, the oldest stock certificate was issued in 1606 for a Dutch company (Vereenigde Oostindische Compagnie) seeking to profit from the spice trade with India and the Far East. Even though very profitable in its day, when the company was dissolved in 1799 it was some 10 million Dutch guilders in debt. American stock exchanges were introduced in the early 18th century but were not prominent until the 19th century; since then, globalization has expanded massively with computer technology, air travel, transcontinental pipelines, and giant cargo ships. Today over 50% of US households own stocks collectively worth over $10 trillion. It is only in the last 20 years that an average person has been able to access instant world news and buy stocks online. Hundreds of millions

of people around the world own publicly traded stocks collectively worth over $40 trillion. Over a trillion dollars’ worth of US mortgages have been securitized and are owned by world citizens. Title certificates to commodities stored around the world, valued in the hundreds of billions of dollars, are changing hands on various commodity exchanges. Table 2.2 shows how gold is the only standard of money that fulfills all the requirements envisioned by Aristotle. Long before the dollar, coins (not gold or silver) were introduced to replace the gold standard. These currencies, like the dollar, have no intrinsic value (Table 2.2). By contrast, a real estate investment trust (REIT) does have a real asset attached to it. These companies own many types of commercial real estate, ranging from office and apartment buildings to warehouses, hospitals, shopping centers, hotels and timberlands. The law providing for REITs was enacted by the U.S. Congress in 1960 (REIT, 2018). The law was intended to provide a real estate investment structure similar to the structure mutual funds provide for investment in stocks. Unlike gold, a REIT can generate income due to its inherent usefulness as rental property. REITs are strong income vehicles because, to avoid incurring liability for U.S. federal income tax, REITs generally must pay out an amount equal to at least 90 percent of their taxable income in the form of dividends to shareholders. However, unlike gold, REITs are not portable.

Table 2.2 Transition of money from the gold standard.

          Durable   Portable   Divisible   Intrinsic value
Gold      Yes       Yes        Yes         Yes
Dollar    Yes       Yes        Yes         No
REIT      Yes       ?          Yes         Yes
ETF       ?         ?          Yes         Yes (?)
Oil ETF   Yes       ?          Yes         Yes (?)

An ETF (exchange-traded fund) is a marketable security that tracks an index, a commodity, bonds, or a basket of assets like an index fund (Investopedia, 2018). Unlike mutual funds, an ETF trades like a common stock on a stock exchange. ETFs experience price changes throughout the day as they are bought and sold. ETFs typically have higher daily liquidity and lower fees than mutual fund shares, making them an attractive alternative for individual investors. Because it trades like a stock, an ETF does not have its net asset value (NAV) calculated once at the end of every day the way a mutual fund does. As can be seen from Table 2.2, an ETF is considered to have intrinsic value, although its durability and portability are questionable. However, because the value of an ETF depends largely on perception, which dictates its trading value, it is hard to call it a real asset. The supply of ETF shares is regulated through a mechanism known as creation and redemption. The process of creation/redemption involves a few large specialized investors, known as authorized participants (APs). APs are large financial institutions with a high degree of buying power, such as market makers that may be banks or investment companies. Only APs can create or redeem units of an ETF. When creation takes place, an AP assembles the required portfolio of underlying assets and turns that basket over to the fund in exchange for newly

created ETF shares. Similarly, for redemptions, APs return ETF shares to the fund and receive the basket consisting of the underlying portfolio. Each day, the fund's underlying holdings are disclosed to the public. As such, ETFs have become a vehicle for controlling public interest through a select group of 'money managers'. This process is inherently anti-nature and contrary to the principle of a free market economy. Oil, which has always carried intrinsic value but has been difficult to store and exchange for other goods, suddenly became a viable medium of exchange and store of value with the advent of the Oil ETF (Bennett, 2014). The United States Oil Fund was founded in April 2006 by Victoria Bay Asset Management, now known as United States Commodity Funds, and the American Stock Exchange. The fund opened on its first day of trading at about $70 per share. By early 2007, it was at approximately $50 per share. In mid-2008, it peaked at $119 per share. Then in early 2009, it set a low of $24 per share. In late 2013, it was at about $34. The price slid in the second half of 2014 and went below $10 per share in early 2016. Fiat paper currencies have been popularized throughout the modern age. The concept of fiat currency, however, began with the introduction of non-silver, non-gold currency, paper currency being a more convenient version of metallic currency. As shown in Table 2.2, fiat currency does not have any intrinsic value; its value is derived entirely from legal tender laws. Compliance with such laws rests on the credibility and strength of the issuing authority. This notion itself is troublesome because no government is immune to becoming obsolete, and any suggestion that a certain country would never suffer from this shortcoming is illogical. History tells us that for any currency to be real, it must be time-tested irrespective of which government is in power.
That notion leads to the fact that a currency cannot be a standard unless it has intrinsic value5. This 'value', however, lies not in so-called usefulness, but rather in the ability to be taken as a standard over all epochs. Today, even 30-year-old Afghan currency has no value, and no government can guarantee anything in return for it, even the equivalent of $1 million. However, if one could find a currency from the Ottoman era (even 100 years ago), it would be as valuable as at the time it was minted. Of course, that is because only gold and silver were used to produce currencies, and the role of the government was limited to guaranteeing the quality and amount of gold/silver (so that counting, rather than weighing, coins would suffice). What Aristotle described as good money 2,000 years ago has not changed: sound money must be a good medium of exchange as well as a store of value. Assets such as oil or land, once not considered good forms of money due to physical or liquidity constraints, have received renewed interest (Zatzman and Islam, 2007). The internet and various pooled products (ETFs) on world markets have enabled those once immobile and/or illiquid goods to be transacted with ease, speed, transparency and low cost among world buyers and sellers. However, the intrinsic value of an ETF depends on perception at a given time. As such, an ETF standard is highly time-sensitive and scientifically cannot be said to have intrinsic value. Even gold today has been rendered into a commodity, whose value fluctuates depending on the stock market. The same applies to oil barrels. What stands out for oil is that it drives the

energy sector, which drives modern civilization. So, oil is in fact more useful than gold, but at the same time it is a consumable, making it a non-standard. One could not treat oil as money since it was not exactly durable or portable. Neither could one use a business (such as a restaurant) as money, since it is hardly divisible or everlasting. Gold has been the choice of money for over 5,000 years because it is valuable, durable, divisible and relatively portable. Zatzman and Islam (2007) introduced the concept of the UMU (universal monetary unit), which could reverse the continuous decline in the economic health of current society. Overall, the standard of money or wealth has seen the following HSSAN (Honey → Sugar → Saccharin → Aspartame → Nothing) degradation, in the form Gold → coin → paper → Bitcoin. However, the moment this modus operandi is applied by others, who want to turn nothing into gold, it draws the anger of the establishment. In March 2018, Bitcoin plunged as social media cracked down on cryptocurrency ads. This came soon after the Facebook scandal that erased $100 billion of Facebook's value (La Monica, 2018). The year began with the Bitcoin price near $20,000 and saw it drop below $8,000. At this point came the threat from Lagarde, the IMF chief, in the phrase: Bitcoin regulation is 'inevitable', threatening to erase Bitcoin's biggest strength. The latest component of this decline is the introduction of Bitcoin itself. Bitcoin is a digital currency as opposed to a physical currency. The Bitcoin website tells us that Bitcoin is a pseudonymous P2P technology operating with no central authority or banks; it is open-source, public, owned by no one and open for everybody to take part in. "Bitcoin is the leader in a new generation of emerging currencies known as 'cryptocurrencies' which aim to, among other things, facilitate the movement of money electronically while still maintaining a sense of privacy" (Hobson, 2013).
A translation of this would be that Bitcoin is the perfection of artificial economics, replacing gold with completely subjective value. Bitcoin is often compared with gold, and one of the chief factors of similarity is the way they are both obtained. Similar to gold, Bitcoins are created via a process called "mining." The problem is, there is nothing real about this 'mining' or this 'coin'. Bitcoin stores no personal data, and following the initial decision to cap supply at 21 million bitcoins, the mining reward is halved every four years, meaning that no more than 21 million bitcoins will ever be produced. The purpose of such a "finish line" is to mimic the finite quantity of a resource such as gold. To crunch some numbers: 3,600 bitcoins were being produced daily, each worth around $100 at the time, making mining itself a profession. Some short-term advantages of Bitcoin are that bitcoins are transferred directly from person to person, fees are much lower, they can be used in any country, accounts cannot be frozen, and no prerequisites or arbitrary limits exist. Also, it is important to note that since Bitcoins are produced without the involvement of governments or banks, they avoid taxes. Finally, the cap of 21 million bitcoins has driven the value of a single coin up, as discussed below. As shown in Figure 2.4, the number of Bitcoin transactions rose sharply during the first 5 years after inception, followed by a milder rise. Such transactions have no value in terms of

the original definition of economics and trade. In the meantime, the number of businesses accepting bitcoin continues to increase. In January 2017, NHK reported that the number of online stores accepting bitcoin in Japan had increased 4.6 times over the past year. BitPay CEO Stephen Pair declared the company's transaction rate grew 3× from January 2016 to February 2017, and explained that usage of bitcoin was growing in B2B supply chain payments (website 1).
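The 21-million cap and the "3,600 bitcoins produced daily" figure discussed above follow directly from the halving schedule. The sketch below verifies both; the 10-minute block interval and the 210,000-block halving interval are standard Bitcoin protocol parameters, not figures stated in this text:

```python
# The 21 million cap follows from the halving schedule: the block reward
# started at 50 BTC and halves every 210,000 blocks (roughly four years).
BLOCKS_PER_ERA = 210_000
reward = 50.0
total_supply = 0.0
while reward >= 1e-8:            # 1 satoshi is the smallest unit
    total_supply += BLOCKS_PER_ERA * reward
    reward /= 2
print(round(total_supply))       # 21000000

# The "3,600 bitcoins produced daily" figure corresponds to the 25 BTC
# reward era: one block every ~10 minutes means 144 blocks per day.
blocks_per_day = 24 * 60 // 10
print(blocks_per_day * 25)       # 3600
```

The geometric series 50 + 25 + 12.5 + … per 210,000-block era converges to exactly twice the first-era issuance, which is where the 21 million figure comes from.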

Figure 2.4 Rise of bitcoin transactions. The entire discussion of 'economics' now hovers around the fact that miners currently observe a 1 MB limit on the overall size of each Bitcoin block, which constrains Bitcoin to around 300,000 transactions per day. It is being professed that a free market in Bitcoin must develop. Already, dramatic rises in miner fees are evident. Figure 2.5 is a chart of the miner fees BitPay paid each month from 2016 to early 2017. This chart does not show miner fees paid by people paying a BitPay invoice or fees paid by users paying from a Copay or BitPay wallet; these are the fees BitPay has paid to transfer its own bitcoin. The fees are normalized to USD to factor out the rising price of Bitcoin. Over the time frame shown, the monthly miner fee expenditure rose 35-fold. Even after factoring out BitPay's transaction growth of nearly 3-fold, it is still nearly a 12-fold increase. This exponential rise is being tagged as natural, and an equilibrium growth rate is expected.
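The "nearly 12-fold" per-transaction increase quoted above is simply the total fee growth divided by the transaction growth, as a quick check shows:

```python
# Reproducing the BitPay fee arithmetic quoted above.
total_fee_growth = 35.0      # monthly miner-fee spend rose 35-fold
transaction_growth = 3.0     # transaction volume grew nearly 3-fold
per_transaction_growth = total_fee_growth / transaction_growth
print(round(per_transaction_growth, 1))   # 11.7, i.e., nearly 12-fold
```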

Figure 2.5 Miner fee rise in 2016–2017 (from website 1). It is believed by some that an alt-coin will rise and take over. That might be an unlikely eventuality, but the fact remains that a market is forming between on-chain, mining-secured payments and off-chain, more conventionally secured payments. An off-chain payment could take the form of an alt-coin transaction, but the reduced security, increased volatility, and lack of liquidity of an alt-coin make that a less attractive option. Bitcoin has gained more legitimacy among lawmakers and legacy financial companies. For example, Japan passed a law to accept bitcoin as a legal payment method, and Russia has announced that it will legalize the use of cryptocurrencies such as bitcoin. In the meantime, Norway's largest online bank, Skandiabanken, has already integrated bitcoin accounts. In March 2017, the number of GitHub6 projects related to bitcoin passed 10,000. Exchange trading volumes continue to increase. For the 6-month period ending March 2017, the Mexican exchange Bitso saw trading volume increase 1500%. During the period of January through May 2017, Poloniex saw an increase of more than 600% in active online traders and regularly processed 640% more transactions. Up until July 2017, bitcoin users maintained a common set of rules for the cryptocurrency (Popper, 2017). On 1 August 2017, bitcoin split into two derivative digital currencies, the 1 MB blocksize legacy chain bitcoin (BTC) and the 8 MB blocksize hard fork upgrade Bitcoin Cash (BCH). The split has been called the 'Bitcoin Cash hard fork' (Smith, 2017). As stated earlier, the moment players other than the establishment's main actors start to use the same scheme, opposition grows. Bitcoin is no exception. As early as December 2017, the software marketplace Steam announced that it would no longer accept Bitcoin as payment for its products, citing slow transaction speeds, price volatility, and high fees for transactions (Anonymous, 2017; Dinkins, 2017).
On 22 January 2018, South Korea brought in a regulation that requires all bitcoin traders to reveal their identities, thus banning anonymous trading of bitcoins (Chapman, 2018). On 24 January 2018, the online payment firm Stripe

announced that it would phase out its support for bitcoin payments by late April 2018, citing declining demand, rising fees and longer transaction times as the reasons (BBC, 2018). Figure 2.6 shows the exponential rise of Bitcoin prices in recent years; the close-to-linear graph below it is plotted on a logarithmic scale. Even though it was launched in 2009, Bitcoin was not traded on any exchange that year. Bitcoin's first recorded price was in 2010; technically, Bitcoin was worth $0 in 2009, during its very first year of existence. Here comes the zero-value foundation concept on the HSSAN scale. In 2010, the highest price for the year was just $0.39. That was the beginning of the superficial introduction of Bitcoin as a standard of wealth. In 2018, this price hovers around $8,500. Bitcoin's price is measured against fiat currency, such as the American Dollar (BTCUSD), Chinese Yuan (BTCCNY) or Euro (BTCEUR). Thus, Bitcoin appears superficially similar to any symbol traded on foreign exchange markets. What is special about Bitcoin is that there is no official Bitcoin price; the price is dictated entirely by various averages based on price feeds from global exchanges. Bitcoin Average and CoinDesk are two such indices reporting the average price. Bitcoin's price is thus a function of supply and demand, the majority of supply being controlled by early adopters and miners. This process closely simulates gold, and as such Bitcoin was designed to have a fixed supply of 21 million coins, over half of which have already been produced. The most famous of these early holders is Bitcoin's creator, Satoshi Nakamoto. Satoshi is thought to hold one million bitcoins, or roughly 4.75% of the total supply (of 21 million). If Satoshi were to dump these coins on the market, the ensuing supply glut would collapse the price. The same holds true for any major holder. However, any rational individual seeking to maximize their returns would distribute their sales over time, so as to minimize price impact.

Figure 2.6 Price of Bitcoin from inception to the end of 2017. With the current mining reward of 12.5 BTC per block solution, the Bitcoin supply is inflating at around 4% annually. This rate will drop sharply in 2020, when the next reward halving occurs. That Bitcoin's price is rising despite such high inflation (and that it rose in the past when the reward was 50 BTC!) indicates extremely strong demand. Every day, buyers absorb the thousands of coins offered by miners and other sellers. A common way to gauge demand from new entrants to the market is to monitor Google Trends data (from 2011 to the present) for the search term "Bitcoin." Such a reflection of public interest tends to correlate strongly with price. High levels of public interest may exaggerate price action; media reports of rising Bitcoin prices draw in greedy, uninformed speculators, creating a feedback loop. This typically leads to a bubble shortly followed by a crash. Bitcoin has experienced at least two such cycles and will likely experience more in the future. The reason the history of Bitcoin and its problems, practical and theoretical, are covered here is to explain how Bitcoin is commensurate with the notion of philosophical subjectivity, in which reality is deemed a function of one's desires and views about the world. The fact of the matter is that Bitcoin is ultimately worth what people will buy and sell it for. This is often as much a matter of human psychology as of economic calculation. Typical of the current definition of reality, i.e., perception creates reality, individual perception, driven mainly by greed and fear, dictates the Bitcoin price. This defines an economic system in which all exchanges and transactions are built on greed and fear, completely detached from need as originally envisioned by Aristotle.
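The roughly 4% annual supply inflation quoted above can be reproduced from the 12.5 BTC block reward. The circulating supply of about 16.5 million BTC used below is an assumption, chosen only to be consistent with the statement that over half of the 21 million coins had been produced at the time:

```python
# Annual issuance at the 12.5 BTC reward, assuming one block every
# ~10 minutes and a circulating supply of ~16.5 million BTC (assumed).
blocks_per_year = 6 * 24 * 365          # ~52,560 blocks per year
new_coins_per_year = blocks_per_year * 12.5
circulating_supply = 16_500_000
inflation_pct = 100 * new_coins_per_year / circulating_supply
print(round(inflation_pct, 1))          # ~4.0 percent per year
```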

It is no surprise that Bitcoin broke through a series of milestones in 2017, despite warnings of a potential bubble. After starting the year below $1,000, it hit $8,000 for the first time in early November and topped $11,000 shortly thereafter. Much of the stunning ascent has been driven by the expectation that big, professional investors are set to start trading it. It has also been propelled by mom-and-pop investors who do not want to miss its meteoric rise. Based on individual perception, people have been bidding its price higher even though leading figures in finance and economics are telling them to beware. Nobel laureate Joseph Stiglitz said in November 2017 (Costelloe, 2017) that bitcoin "ought to be outlawed." Criticism has also come from the likes of JPMorgan Chase (JPM) CEO Jamie Dimon and legendary investor Warren Buffett. The economists' warning came on the heels of Bitcoin surpassing $11,000 within hours of hitting the $10,000 milestone, taking the 2017 price surge to almost 12-fold as buyers shrugged off increased warnings that the largest digital currency is an asset bubble. "It's a bubble that's going to give a lot of people a lot of exciting times as it rides up and then goes down," stated Stiglitz. One would think a conscientious economist would pick up on the fact that there is nothing real about Bitcoin and that it launches economics in the wrong direction, but that was not the case. Stiglitz, as well as others, worried about the ability to regulate Bitcoin prices. Added to the worry was the fact that there would be hackers more proficient in manipulating the internet-driven economy than the Establishment itself. Soon, it was discovered that "highly professional" hackers had stolen around 4,700 Bitcoin, valued at nearly $80 million, from NiceHash, a leading mining service based in Slovenia (The Guardian, 2017). The company said the attack was probably made from an IP address outside the European Union.
Local and international authorities are investigating, the NiceHash CEO said. The incident was one of at least three dozen heists on exchanges that buy and sell digital currencies since 2011. Mt. Gox, once the largest Bitcoin exchange, collapsed in 2014 after being robbed of more than $470 million. Other Bitcoin exchanges have faced criminal charges of money laundering. A recent report in Forbes (Kroll, 2018) shows how modern monetary and exchange policy has created an extreme divergence between the rich and the poor. In a sign of the rich getting richer, in 2018 the world's billionaires were altogether worth a record $9.1 trillion, up 18% from a year earlier (Kroll, 2018). Many of the individuals on Forbes' 2018 World's Billionaires list got rich by handling other people's money. The finance and investments industry (including private equity owners, hedge fund managers and discount brokers) helped produce more billionaires than any other. Altogether the sector held 310, or around 14%, of the fortunes on the 2,208-person list. Billionaires in finance topped the ranks in countries ranging from Brazil to Indonesia. Of these, 24 were newcomers. That includes the first-ever cryptocurrency billionaires, Chris Larsen and Changpeng Zhao (known as CZ), as well as Canadian Stephen Smith, who launched his mortgage lender just four years after being personally bankrupt (Seddiq, 2018).
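The "around 14%" share of finance fortunes cited above follows directly from the list figures:

```python
# Share of the Forbes 2018 billionaires list held by the finance and
# investments sector (310 fortunes out of 2,208 people on the list).
finance_fortunes = 310
list_size = 2_208
share_pct = 100 * finance_fortunes / list_size
print(round(share_pct, 1))   # 14.0 percent
```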

2.4 The Nature Science Tract of Economics History

It is known that the exchange of personal possessions in order to improve the value of one's life is as old as human civilization. In every nation, ancient stories tell us of the nature of this exchange of livestock, household goods, and, in many societies, crops. This bartering system rested on the most essential component of economics, viz., the double coincidence of needs or wants. If there are two parties in a transaction, party A and party B, then B should not only have what A wants but must also be in need of what A can offer in exchange. At this stage, such bilateral dependence ensured social justice, but such a bartering system restricted the transactions that could be made, consumed more time and curtailed specialization (Hasan, 2011). Outside of nomadic traditions, any trading practice required a standard that would help render commensurate various goods not easily identified as objects of want between two parties. This notion of a standard that in itself is not as valuable as its convenience in rendering various goods commensurate is as old as human civilization. In fact, there is no evidence of a society or economy that relied primarily on barter (Mauss, 2002). It is recognized that non-monetary societies operated largely along the principles of gift economy and debt (Graeber, 2011). In India, cow dung was used as a standard (Picture 2.1). The implication that a standard itself should have usefulness in addition to intrinsic value is clear, considering that a large segment of the society used dried cow dung for cooking and heating. The Mesopotamian shekel was a unit of weight, and relied on the mass of something like 160 grains of barley (Kramer, 1988). The first usage of the term came from Mesopotamia circa 3000 BC. It is difficult to ascertain when the first coins were introduced as a standard of wealth. However, the process of smelting was invented during the time of King Solomon (Khan and Islam, 2016).
Once again, the timeline is not definitive. Some have suggested that the first stamped coins were minted around 650–600 BC (Picture 1.2). This coin type, made of a gold and silver alloy, is believed to be the world's first, minted by King Alyattes in Sardis, Lydia, Asia Minor (present-day Turkey), c. 610–600 BC. It can be attributed, among other ways, as Weidauer 59–75 (Type 15), as claimed by Coinproject.com.

Picture 1.2 Lydian electrum trite (4.71 g, 13x10x4 mm), made from a gold-silver alloy (from Coinproject.com). The coins in Picture 1.2 represent gold and silver coins minted sometime around 600 BC in Lydia, Asia Minor (present-day Turkey). This coinage is often dubbed the "mother of all coinage" (Porteous, 1980). The Mycenaean civilization also widely used gold coins, as did the later Greek and Roman Empires, although silver was the more usual material used (Porteous, 1980).

It appears from historical relics that gold has been a symbol of wealth and an undisputed standard for measuring wealth, as well as for the commensuration of diverse goods, albeit with unclear equivalency. In terms of usefulness, it has always been used as a decorative covering: gold plate and gold leaf have been used to decorate shrines, temples, tombs, sarcophagi, statues, ornamental weapons and armour, ceramics, glassware and jewelry since Egyptian times or earlier. Perhaps the most famous example of gold leaf from antiquity is the death mask of King Tutankhamun. Gold is also useful for its malleability and incorruptibility. For instance, it has been used in dental work for over 3,000 years; the Etruscans in the 7th century BCE used gold wire to fix substitute animal teeth in place. As thread, gold was also woven into fabrics. Gold has also been used in medicine: for example, Pliny in the 1st century CE suggests gold should be applied to wounds as a defense against 'magic potions'. Fast-forwarding to the Information Age, such applications continue to be promoted (Dykman and Khlebtsov, 2011). Indeed, gold has seen widespread applications in various forms, and research continues on biomedical applications, including genomics, biosensorics, immunoassays, clinical chemistry, laser phototherapy of cancer cells and tumors, the targeted delivery of drugs, DNA and antigens, optical bioimaging and the monitoring of cells and tissues with the use of state-of-the-art detection systems. It is becoming progressively clear that gold has been in use much earlier than formerly thought. Pappas (2016) indicates that a Stone Age woman found buried outside of London wore a strand of gold around her neck; Celts in the third century BC wore gold dental implants; a Chinese king who died in 128 BC was buried with gold-gilded chariots and thousands of other precious objects. Islam et al. (2010) indicated that we do not have sufficient evidence to pinpoint the chronology of gold mining and processing.
An equally important notion is the rarity of gold. Chemically speaking, gold is a transition metal. Transition metals are unique because they can bond with other elements using not just their outermost shell of electrons (the negatively charged particles that whirl around the nucleus), but also the outermost two shells. This happens because the large number of electrons in transition metals interferes with the usual orderly sorting of electrons into shells around the nucleus. It is unanimously accepted that gold is rare. As of 2016, the central banks of various countries reportedly owned 32,754 tonnes, or about 17.8 percent of the total amount of gold ever mined (Holmes, 2016). Table 2.3 shows the gold reserves held by different countries.

Table 2.3 Gold reserves held by various countries (data from Holmes, 2016).

Ranking   Country          Tonnes of gold   Percent of foreign reserves
1         United States    8,133.5          74.9
2         Germany          3,381            68.9
3         Italy            2,451.8          68
4         France           2,435.7          62.9
5         China            1,797.5          2.2
6         Russia           1,460.4          15
7         Switzerland      1,040            6.7
8         Japan            765.2            2.4
9         Netherlands      612.5            61.2
10        India            557.7            6.3

Table 2.4 shows the top 20 gold-holding countries and organizations according to the World Gold Council's latest rankings (as of March 2018).

Table 2.4 Top gold holding countries.

Rank   Country/organization           Gold holdings (in tons)   Gold's share of forex reserves
1      United States                  8,133.5                   75.3%
2      Germany                        3,373.6                   71.0%
3      International Monetary Fund    2,814.0                   N/A
4      Italy                          2,451.8                   68.3%
5      France                         2,436.0                   64.6%
6      Russia                         1,857.7                   17.9%
7      China                          1,842.6                   2.4%
8      Switzerland                    1,040.0                   5.5%
9      Japan                          765.2                     2.6%
10     Netherlands                    612.5                     67.5%
11     Turkey                         582.2                     21.9%
12     India                          558.1                     5.8%
13     European Central Bank          504.8                     28.7%
14     Taiwan                         423.6                     3.9%
15     Portugal                       382.5                     63.6%
16     Saudi Arabia                   322.9                     2.8%
17     United Kingdom                 310.3                     8.6%
18     Kazakhstan                     303.0                     41.3%
19     Lebanon                        286.8                     22.2%
20     Spain                          281.6                     17.3%

Gold is so rare that its rarity cannot be explained by the conventional geological theory of the Earth. If one believes the conventional notion that, during the formation of the Earth, molten iron sank to its center to make the core, then the majority of the Earth's iron-loving precious metals should also have sunk to the core. Consequently, the outer portion of the Earth should have been deprived of precious metals. However, precious metals are tens to thousands of times more abundant in the Earth's silicate mantle than anticipated. It has previously been argued that this serendipitous over-abundance results from a cataclysmic meteorite shower that hit the Earth after the core formed. The full load of meteorite gold was thus added to the mantle alone and not lost to the deep interior. This hypothesis explained the rarity of gold, albeit with dogmatic assertion. In 2011, Willbold et al. provided 'evidence' in support of this hypothesis. Their ultra-high-precision analyses of some of the oldest rock samples on Earth 'established' that the planet's accessible reserves of precious metals are the result of a bombardment of meteorites more than 200 million years after the Earth was formed. They analyzed rocks from Greenland that are nearly four billion years old. These ancient rocks

provide a unique window into the composition of the Earth shortly after the formation of the core but before the proposed meteorite bombardment. They determined the tungsten isotopic composition of these rocks. Tungsten (W) is a very rare element (one gram of rock contains only about one ten-millionth of a gram of tungsten) and, like gold and other precious elements, it should have entered the core when it formed. Tungsten comprises several isotopes, atoms with the same chemical characteristics but slightly different masses. Isotopes provide robust fingerprints of the origin of material, and the addition of meteorites to the Earth would leave a diagnostic mark on its W isotope composition. These researchers observed a 15 parts per million decrease in the relative abundance of the isotope 182W between the Greenland and modern-day rocks. This small but significant change is in excellent agreement with that required to explain the excess of accessible gold on Earth as the fortunate by-product of meteorite bombardment. This narration, however, creates another paradox: if gold is from meteoric sources, how can one explain its link to outer space? The research group of Stockholm University's Stephan Rosswog suggested in the early 2000s that gold, platinum and other heavy metals could be formed when two exotic stars (neutron stars) crash and merge. Neutron stars are essentially stellar relics, the collapsed cores of massive stars. This hypothesis was given traction a decade later, when new telescopic data 'detected' such an explosion, bolstering the notion that gold was made in such rare and violent collisions long before the birth of the solar system about 4½ billion years ago. The detection came from the research group of Edo Berger of the Harvard-Smithsonian Center for Astrophysics.
It was claimed in a press release that people "walk around with a little tiny piece of the universe." This particular work was not published (Berger et al., submitted), but the theory behind it was (Berger et al., 2012). With NASA's Swift telescope they observed a gamma-ray burst that they claimed resulted from the crash of dead stars. The burst, in a distant galaxy, was some 3.9 billion light-years away. They further concluded that the burst lasted only a fraction of a second. Using ground telescopes and the Hubble Space Telescope, Berger's team noticed an odd glow that lasted for days. Infrared light in the glow could be evidence that heavy elements like gold had spewed out of the cosmic crash. Even though this work did not delve into how the Earth was sprinkled with gold-bearing meteoric showers, a connection was made with previous studies suggesting that a meteor shower may have delivered gold and other precious metals to Earth. Table 2.5 shows the world's reserves of gold, along with recent production. The World Gold Council estimates that all the gold ever mined totaled 187,200 tonnes in 2017, but other independent estimates vary by as much as 20% (Prior, 2013). At a price of US$1,250 per troy ounce, reached on 16 August 2017, one tonne of gold has a value of approximately US$40.2 million. The total value of all gold ever mined would exceed US$7.5 trillion at that valuation, using the WGC 2017 estimate.
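The US$40.2 million per tonne and US$7.5 trillion figures above follow from a straightforward unit conversion (the troy-ounce conversion factor is a standard constant, not a figure from this text):

```python
# Value of all mined gold at US$1,250 per troy ounce, using the WGC 2017
# estimate of 187,200 tonnes ever mined (1 tonne = 32,150.7 troy ounces).
TROY_OZ_PER_TONNE = 32_150.7
price_per_oz = 1_250
value_per_tonne = TROY_OZ_PER_TONNE * price_per_oz
print(round(value_per_tonne / 1e6, 1))    # 40.2 (million USD per tonne)
total_value = value_per_tonne * 187_200
print(round(total_value / 1e12, 2))       # 7.52 (trillion USD)
```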

Table 2.5 Gold reserves and recent production, in metric tons (data from USGS, 2018).

Country                  Mine production 2016   Mine production 2017   Reserves
United States            222                    245                    3,000
Australia                290                    300                    9,800
Brazil                   85                     85                     2,400
Canada                   165                    180                    2,200
China                    453                    440                    2,000
Ghana                    79                     80                     1,000
Indonesia                80                     80                     2,500
Kazakhstan               69                     70                     1,000
Mexico                   111                    110                    1,400
Papua New Guinea         62                     60                     1,300
Peru                     153                    155                    2,300
Russia                   253                    255                    5,500
South Africa             145                    145                    6,000
Uzbekistan               102                    100                    1,800
Other countries          840                    845                    12,000
World total (rounded)    3,110                  3,150                  54,000

Of course, these reserve data are dynamic. They may be reduced as ore is mined and/or the feasibility of extraction diminishes, or, more commonly, they may continue to increase as additional deposits (known or recently discovered) are developed, or as currently exploited deposits are more thoroughly explored and/or new technology or economic variables improve their economic feasibility. Consequently, the magnitude of that inventory is necessarily limited by many considerations, including the cost of drilling, taxes, the price of the mineral commodity being mined, and the demand for it. USGS reports provide estimates of undiscovered mineral resources using a three-part assessment methodology (Singer and Menzie, 2010). Mineral-resource assessments have been carried out by the USGS for small parcels of land being evaluated for land reclassification, for the Nation, and for the world. Figure 2.8 shows historical trends in assessing gold reserves. As with petroleum resources, however, gold reserves are not amenable to regeneration; unlike petroleum, gold's usage as a standard is not conflated with its consumption. Both of these are important considerations for petroleum resources (Speight and Islam, 2016). Also, only 5–10% of world gold output is consumed by industry; the remaining 90 to 95 percent is used for monetary purposes. This further consolidates gold's position as the most suitable standard.

Figure 2.8 World reserve of gold (from Gold Reserve, Inc.). Figure 2.9 shows US annual gold production, 1840–2012. Gold mining in the United States has taken place continually since the discovery of gold at the Reed farm in North Carolina in 1799. Large-scale production of gold started with the California Gold Rush in 1848. Gold mining has also been affected by political events and wars. For instance, the closure of gold mines during World War II by War Production Board Limitation Order No. 208, in autumn 1942, had a major impact on production until the end of the war. After Nixon decoupled gold from the US dollar, there was a decline in gold production that continued until 1980. US gold production greatly increased during the 1980s, due to high gold prices and the use of heap leaching to recover gold from disseminated low-grade deposits in Nevada and other states. This trend continued until 2000, after which gold production declined steadily. During the same period the gold price rose sharply (Figure 2.10). After 2013, the gold price started to drop even though demand for gold continued to rise (Figure 2.11).

Figure 2.9 US Annual gold production. (From California Gold Mining, Inc.)

Figure 2.10 Gold price fluctuations since the 1971 decoupling of gold and the US dollar (from onlygold.com).

Figure 2.11 Inflation-adjusted price of gold (from Rowlatt, 2013). In 2016, the United States produced 209 tonnes of gold, worth about US$8.5 billion and 6.7% of world production, making it the fourth-largest gold-producing nation, behind China, Australia and Russia. Most gold produced today in the US comes from large open-pit heap-leach mines in the state of Nevada. The US is a net exporter of gold. The next most important standard is silver. A USGS report (2017) shows that US mines produced approximately 1,100 tons of silver with an estimated value of $570 million. Silver was produced at three silver mines and as a byproduct or coproduct at 37 domestic base- and precious-metal mines. Alaska maintained its position as the leading silver-producing state, followed by Nevada. Twenty-four US refiners reported production of commercial-grade silver, with an estimated total output of 2,100 tons from domestic and foreign ores and concentrates and from old and new scrap. The physical properties of silver include high ductility, electrical conductivity, malleability, and reflectivity. Compared to gold, silver is used mainly in industrial applications. In 2016, the estimated domestic uses for silver were electrical and electronics, 30%; coins and medals, 27%; jewelry and silverware, 7%; photography, 6%; and other, 30%. Other applications for silver include use in antimicrobial bandages, clothing, pharmaceuticals, and plastics; batteries; bearings; brazing and soldering; catalytic converters in automobiles; electroplating; inks; mirrors; photovoltaic solar cells; water purification; and wood treatment.

There is one additional role for silver, namely to act as a backup for gold. This aspect will be discussed in later chapters.

Table 2.6 Mine production and reserves of silver in various countries, in tons.

Country                  Mine production         Reserves
                         2015       2016e
United States             1,090      1,100        25,000
Australia                 1,430      1,400        89,000
Bolivia                   1,190      1,300        22,000
Chile                     1,370      1,500        77,000
China                     3,100      3,600        39,000
Mexico                    5,370      5,600        37,000
Peru                      3,850      4,100       120,000
Poland                    1,180      1,400        85,000
Russia                    1,430      1,400        20,000
Other countries           5,000      5,400        57,000
World total (rounded)    25,100     27,000       570,000

The world has seen approximately 10 times as much silver production as gold, in keeping with world reserves (both above ground and underground). Cumulative world silver production is estimated at 43 billion ounces, as compared to 4.3 billion ounces of gold. Figure 2.12 shows the silver production history of the USA, while Figure 2.13 shows the silver reserves of the top silver-reserve countries.

Figure 2.12 U.S. mine production of silver from 1860 to 2000. Total production prior to 1860 was estimated to be 25 metric tons (t) (data from USGS, 2018a).

Figure 2.13 Silver reserves in the top silver-reserve countries (from Statista, 2018c). Veins of gold mined from the earth are the result of hot fluids flowing through gold-bearing rock, picking up gold and concentrating it in fractures, according to the American Museum of Natural History (AMNH). Figure 2.14 shows the silver-to-gold price ratio of the last 40 years in the USA. This ratio is fundamental to understanding the relationship between silver and gold, and it is one of the many key indicators used by seasoned precious-metals holders to help determine the right time to buy silver or gold, in light of their portfolio goals. In the past, before both gold and silver were made into commodities, natural supply and demand dictated the price. During the time of prophet Muhammad and the establishment of the young Caliphate, a great deal of emphasis was put on natural supply and demand. An important hadith (verified historical saying) of the prophet is: “If you have 200 dirhams, and it has been saved for a year, (you are) obliged to spend 5 dirhams of zakat from it. And there is no obligation of zakat in gold, until you have 20 dinars. If you already have 20 dinars and it has been saved for a year, then the amount of zakat that must be taken from it is 1/2 of a dinar.” (Narrated by Abu Dawood, no. 1391, and authenticated by Al Albani). This hadith sets the silver-to-gold ratio to 7. There is another hadith: Allah’s Messenger said, “Don’t sell gold for gold unless equal in weight, nor silver for silver unless equal in weight, but you could sell gold for silver or silver for gold as you like.” (Sahih al-Bukhari 2175). This one sets gold as the true standard while allowing silver to float based on supply and demand or mutual agreement between the exchanging parties.
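The “ratio of 7” follows from the two zakat thresholds (nisab) in the hadith once the classical coin masses are supplied. The masses used below (1 dinar = 4.25 g of gold, 1 dirham = 2.975 g of silver) are the commonly cited classical values and are an assumption added here, not stated in the source:

```python
# Silver-to-gold weight ratio implied by the zakat thresholds in the hadith.
DINAR_GRAMS = 4.25     # classical gold dinar mass (assumed value)
DIRHAM_GRAMS = 2.975   # classical silver dirham mass (assumed value)

silver_nisab_g = 200 * DIRHAM_GRAMS    # 200 dirhams -> 595 g of silver
gold_nisab_g = 20 * DINAR_GRAMS        # 20 dinars   -> 85 g of gold
ratio = silver_nisab_g / gold_nisab_g  # 595 / 85 = 7

# Both thresholds carry the same zakat rate of 2.5%.
zakat_rate_silver = 5 / 200
zakat_rate_gold = 0.5 / 20
print(ratio, zakat_rate_silver, zakat_rate_gold)
```

Under these assumed coin masses, the two thresholds are equivalent in value at exactly 7 grams of silver per gram of gold, and the zakat rate is the same 2.5% on either metal.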

Figure 2.14 Silver/gold price ratio in the USA (from https://goldprice.org/gold-price-history.html). Data on the silver-gold ratio are available in the modern age, but they do not reflect its natural status, at least in post-Renaissance Europe. For much of that history, the silver-to-gold ratio hovered around 17.5. The ratio was set at 15:1 by the U.S. government from 1792 to 1833 and increased to 16:1 in 1834 (Fulp, 2016). Figure 2.15 shows the frequency of occurrence of various gold-silver ratios over the 45 years from 1971 to 2016. The y-axis represents the frequency in months for each ratio class (x-axis).

Figure 2.15 Gold/silver ratio distribution (from Fulp, 2016). The following observations were made by Fulp (2016):

A ratio of less than 20 is a rare outlier that occurred for only two months in 1980, when both gold and silver went exponential.

Ratios between 20 and 30 are quite unusual, at 3.0% of the record. They occurred in the first two months after Nixon decoupled gold from US currencies in 1971; for one month in 1974; for eight months in 1976, when gold corrected and silver was flat; and for five months in late 1979 and early 1980, when both metals were in exponential rise, followed by the collapse of silver.

Ratios from 30 to 40 (18.5%), 50 to 60 (19.4%), 60 to 70 (18.8%), and 70 to 80 (20.0%) are common and almost evenly distributed in the price records. The middle increment from 40 to 50 comprises only 12.7% of the total months of study.

Ratios from 80 to 90 constitute 5.2% of the record. Almost all occurred during a three-year period from September 1990 to November 1993, when there was an oversupply of silver stocks, industrial demand was down, and prices languished from $3.65 to $5.00 an ounce. Other than that three-year period, there were only two months, March 1995 and March 2016, when the monthly average ratio was above 80.

Ratios greater than 90 make up 2.1% of the record and also occurred during the same 1990–1993 interval. The ratio averaged over 97 in February 1991, a value exceeded only in 1939, at the end of the Depression and the beginning of World War II.

From a compendium of sources, the average abundance of gold in Earth’s crust is about 4 ppb while silver is about 70 ppb, for a ratio of 1:17.5. Based on these crustal abundances, silver perma-bulls promote a platform that gold-silver ratios should be less than 20.
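Fulp’s frequency table can be reproduced mechanically once a monthly price series is available. The sketch below shows only the bucketing step; the four price pairs at the bottom are made-up placeholders, not market data:

```python
# Classifying monthly gold/silver price pairs into Fulp-style ratio bins.
from bisect import bisect_right
from collections import Counter

BIN_EDGES = [20, 30, 40, 50, 60, 70, 80, 90]

def ratio_bin(gold_price, silver_price):
    """Return the label of the ratio class a price pair falls into."""
    i = bisect_right(BIN_EDGES, gold_price / silver_price)
    if i == 0:
        return "<20"
    if i == len(BIN_EDGES):
        return ">=90"
    return f"{BIN_EDGES[i - 1]}-{BIN_EDGES[i]}"

# Synthetic monthly (gold $/oz, silver $/oz) pairs for illustration only.
monthly = [(1200.0, 15.0), (1300.0, 16.0), (450.0, 25.0), (1800.0, 18.0)]
counts = Counter(ratio_bin(g, s) for g, s in monthly)
print(counts)
```

Applied to a real 1971–2016 monthly series, the resulting percentages per bin would be the figures Fulp reports.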

Figure 2.16 The rarest metals (from Haxel et al., 2002).

Figure 2.16 shows the abundance of various elements in the Earth’s crust. Note that iridium is the rarest of the elements, yet its properties do not make it amenable to serving as a standard. Iridium, atomic number 77, is a very hard, brittle, silvery-white transition metal of the platinum group, and the second densest element (after osmium). It is also among the most corrosion-resistant metals, even at temperatures as high as 2000 °C. Although only certain molten salts and halogens are corrosive to solid iridium, finely divided iridium dust is much more reactive and can be flammable. Similar problems arise with titanium and zirconium. Within the range of atomic numbers 75 to 85, gold is the most abundant element. It is also the easiest to smelt: the other metals in this range are very hard to smelt, requiring a furnace heated to the region of 1,000 °C before one can even begin to extract them from their ores; the melting point of platinum, for instance, is 1,768 °C. On the tangible side, the most attractive feature of gold is that it is golden; humans have a natural affinity to its external appearance (Rowlatt, 2013). There is some indication that gold stands for the human notion of wealth in the way honey stands for human health (Islam et al., 2015). An entire chapter of the Qur’an, named Az-Zukhruf (Ornaments of Gold), talks about an afterlife for the believers that is full of “dishes and goblets of gold” (43:71), joy, and a garden of bountiful fruit to eat (43:73). It affirms that believers and their spouses will “enter Paradise” (43:70). Likewise, within atomic numbers 45–55, silver is the only element that is non-toxic to the environment and human health. Silver has many special properties that make it a very useful and precious metal. It has an attractive shiny appearance although, unlike gold, it tarnishes easily; the tarnish is silver sulphide, which forms as silver reacts with sulphur compounds in the atmosphere. Of all the metals, silver is the best conductor of heat and electricity, with the highest electrical and thermal conductivity of any metal. It is strong, malleable and ductile, although not as malleable as gold, and it reflects light very well. Altogether, silver has much more use as a commodity than gold, but it remains second-tier for the purpose of a monetary standard. Table 2.7 compares some of the distinctive features of gold and silver.

Table 2.7 Distinction between gold and silver.

Use. Silver: mostly used as an industrial metal (54%). Gold: a precious metal; 90% is used in jewelry and investments and 10% in industrial applications.
Recycling. Silver: much of the yearly demand is consumed and a relatively minor amount is recycled. Gold: an estimated 98% of all the gold ever mined remains available, held in jewelry, by central banks, in private hoards, and as fabricated products.
Cumulative production. Silver: estimated cumulative world production is 1,690,000 tons. Gold: cumulative historic world production is estimated from various sources to be about 182,000 tons (a silver-to-gold ratio of 9.2:1).
Supply. Silver: about 70% of new silver is a byproduct from base-metal or gold mines, so silver production is largely dependent on the prices of these primary metals. Gold: gold is the primary driver of silver supplies.

Fulp (2016) argued that since August 1971, when gold was partially freed from the US dollar on world exchanges, the Earth’s crustal abundances, historic fixed-price relationships, and mine production have had little influence on the gold-silver ratio. Instead, the relative prices of gold and silver are driven by:

Industrial demand for silver and the vast amount of available above-ground stocks of silver held by hoarders and speculators.
The health of the world economy and geopolitical events.
Central bank transactions and safe-haven hoarding of gold.
Speculative traders moving in and out of the physical and paper markets of both metals.

Fulp correctly points out that gold is the only real money, and his safe haven and insurance policy against financial calamity. Calling gold a commodity, or letting its price fluctuate according to market perception, does not make gold a non-standard; rather, it changes the natural state of the economy to the detriment of the public interest. Although silver functions mainly as an industrial metal, it is strongly tied to the price of gold and is generally more volatile during upside and downside moves of the yellow metal. In times of financial distress and economic calamity, silver tends to behave more like a precious metal, with widespread hoarding of gold trickling down to it. For this reason, it is often called the “poor man’s gold”. The gold-silver ratio lends valuable guidance in ascertaining whether one metal is over- or undervalued with respect to the other. The rarity of a daily gold-silver ratio above 80 is evidence that silver is severely undervalued, and is a strong buy signal for the metal.

2.5 Connection to Energy

In today’s economy, gold is considered to be a commodity, whose value fluctuates just like that of any other metal, for instance silver, platinum, palladium, or copper. Along with these metals, a range of other products, such as crude oil, gas, minerals, and crops, are also traded at fluctuating values depending on trading prices. It is interesting to note that gold is a precious metal and is rare, whereas oil is the second most abundant liquid on Earth (second only to water). Gold and oil, the latter often dubbed ‘black gold’, sit at opposite ends of the resource spectrum, and substituting one for the other represents a movement in the wrong direction in terms of scientific economic analysis. Also, gold mining cost remains relatively constant. Table 2.8 shows a comparison of inherent traits of gold and oil (black gold), and shows how these two natural products are diametrically opposite to each other.

Table 2.8 Comparison of various traits of gold and oil.

Transport cost. Gold: insignificant compared to gold value. Oil: was higher than intrinsic cost until the 1970s price surge.
Abundance. Gold: one of the rarest metals. Oil: second most abundant fluid on Earth.
Mining cost. Gold: geographically stable. Oil: geographically variable.
Original source. Gold: inherently high purity, with monolithic composition. Oil: inherently mixed, with complex composition.
Recyclability. Gold: inherently recyclable (98% recycled). Oil: inherently consumable, 0% recycled (except plastic, which is harmful to the environment).
Reactivity. Gold: non-reactive to the environment (zero reactivity with air). Oil: very reactive to the environment (low-temperature oxidation is continuous at any temperature).
Origin. Gold: symbolizes a source product. Oil: symbolizes an end product.
Decoration value. Gold: inherently bright. Oil: inherently dark.
Vulnerability. Gold: not vulnerable to destruction in explosion, flooding, or earthquake. Oil: extremely vulnerable to explosion, leakage, and loss to the environment.
Storage capability. Gold: easy to store, stable. Oil: difficult to store, unstable.
Refining. Gold: refining improves environmental sustainability. Oil: refining increases reactivity with the environment.
Technology. Gold: constituent industries are not technology intensive. Oil: constituent industries are highly technical, requiring knowledge of economic issues.
Relation to economic activity. Gold: passive role. Oil: each economic activity is energy-driven, creating a pivotal role for the oil industry.
Geopolitics. Gold: inherently apolitical; politicization is mostly a matter of perception. Oil: the sector is influenced by interactions at different levels (international, regional, national and even local), most of which go beyond the subject of one discipline.

As pointed out by Speight and Islam (2015), the energy sector is dominated by the oil industry, which literally enjoys the position of driver of the modern economic system. Although analyses of energy problems have attracted interdisciplinary interest and researchers from various fields have left their impressions on these studies, energy economics did not evolve into a specialization until the first oil shock of the 1970s (Edwards, 2003). The dramatic increase in oil prices in 1973–1974 highlighted the importance of energy in the economic development of countries. Since then, researchers, academics and even policymakers have taken a keen interest in energy studies, and today energy economics has emerged as a recognized branch in its own right (Zatzman, 2013).

Since price changes are of crucial importance for commodities, relationships between these two ‘commodities’ are often examined in detail to establish whether the prices of one commodity can fuel the prices of another. If we ignore for the moment that gold is not a commodity and should not be traded at varying values, we discover a strong relationship between the prices of gold and silver, where the price of silver strongly depends on the price of gold. A similar trend is seen between gold and oil (‘black gold’). Figures 2.17 and 2.18 show the variation of gold prices along with oil prices. Because oil is benchmarked in dollars and the dollar was pegged to gold before 1971, there is no difference in trends between gold prices and oil prices up to that point. After 1971, the driver of the gold price (which, inflation-adjusted, should not be fluctuating anyhow) and the driver of oil prices were decoupled. Even then, there is a general trend of gold and oil prices rising and falling in tandem. This trend continues until the financial collapse of 2008, after which there has been a diverging trend in these two ‘commodity’ prices.
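One simple way to quantify such “rise and fall in tandem” is the Pearson correlation coefficient between the two price series. The sketch below implements it from scratch; the two series at the bottom are illustrative made-up numbers, not historical prices:

```python
# Pearson correlation between two equal-length series.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical annual averages (not real data): gold in $/oz, oil in $/bbl.
gold = [35.0, 40.0, 100.0, 160.0, 300.0, 615.0]
oil = [3.6, 4.1, 11.6, 13.9, 30.0, 37.0]
print(f"Pearson r = {pearson(gold, oil):.2f}")  # near +1 when moving in tandem
```

Computed over rolling windows of a real price history, a drop in this coefficient is one way to make the post-2008 divergence described above visible in the data.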

Figure 2.17 Variation in “gold dollar” and “petrodollar” (left y-axis: gold price in $/oz; right y-axis: crude oil price in $/bbl); from American Bullion, Inc.

Figure 2.18 Variation of gold price and oil price during 1987–2012 (from American Bullion, Inc.). The main idea behind the gold-oil relation is that prices of crude oil partly account for inflation. If gold were not treated as a commodity, it would be constant in its value, whereas oil would fluctuate. Increases in the price of oil result in increased prices of gasoline, which is derived from oil. If gasoline is more expensive, it becomes costlier to transport goods, and their prices go up. The final result is an increased price level; in other words, inflation. The second part of the causal link is the fact that precious metals tend to appreciate as inflation rises (in the current, fiat, monetary environment). So an increase in the price of crude oil can eventually translate into higher precious-metal prices, as long as other factors are not significant contributors to the price change. The problem with the mainstream explanation is that both the oil price and the gold price are explained with dogmatic logic. Each theory is built on an illogical and absurd premise, and explanations are repeated with more illogical assertions. Missing in every analysis and theory is a comprehensive depiction of economic activities, including the connection between the energy inputs into an economy and a culture that focuses on wealth accumulation as the primary incentive for economic activities. In order to introduce logic to this otherwise dogmatic analysis of economics, one must recognize the need to invoke fundamental changes in the way energy needs are looked upon. Then comes the need to analyze long-term outcomes rather than focusing on quarterly growth, a mindset that is myopic and wrong-headed. The new synthesis is a necessity, as current monetary policy stays between two critical states of implosion, each touted as the panacea by its proponents and as a ‘point of no return’ by its opponents.
While pundits have recognized each of these states as a point of criticality (Kunstler, 2013), few, if any, have recognized that both critical points lie on the same side of aphenomenality and can be avoided only by a fundamental shift in economic policies. Aristotle correctly observed the commensurability of goods in relation to need, the unit of value being need or demand. Need, rather than something in the nature of goods, is what makes them epistemically commensurable. Aristotle observes, however, that although need is capable of variable magnitudes, it lacks a unit of measure until money is introduced to provide it.

Ultimately, he concludes that it may be impossible for different goods and services to be strictly commensurable. In his idea of commensurability, Aristotle was the first to identify a serious and authentic problem of economics. This problem was pragmatically ‘solved’ in the modern era by replacing the gold standard with other commodities or, worse, with currencies that had no intrinsic value. While Aristotle’s quest for justifiable ratios of fair exchange, so that products could be treated as “commensurable enough” to permit the exchange, was a genuinely scholarly one, today’s economists have left no room for such a scholarly quest. All goods have been converted into mere objects of greed expressible as money, and the ratios are quantitative and precise, all the while the definitions of ‘precision’ and ‘value’ are being manipulated. In the Nicomachean Ethics, Aristotle states that exchange depends on equality of both persons and commodities. It is in this work that he concentrates on the problem of commensurability, using artisans as examples for his general and abstract discussions. In Book V of the Nicomachean Ethics, dealing with justice, he is concerned with determining proper shares in various relationships: he analyzes the subjective interactions between trading partners looking for mutual benefit from commercial transactions, develops the concept of mutual subjective utility as the basis of exchange, and develops the concept of reciprocity in accordance with proportion. In his treatment of justice, Aristotle applied the concepts of ratio and proportion to explain just distribution. He states that fair exchange is a type of reciprocity, not of equality but of proportion, achieved by equalizing the proportions of products. His concern here is with the ratios in which goods are exchanged.
Individuals create products of different value, and unequal creators are made equal through the establishment of proportionate equality between the products. This led Aristotle to the consideration of commensurability and to inquire into the notion of exchange values. According to Aristotle, value is assigned by man and is not inherent in the goods themselves. He says that exchange occurs because what the participants want is different from what they have to offer. This is the first known effort to convert perception into value for a commodity. Interestingly, Aristotle did not consider gold a commodity. Today, gold has been rendered a commodity, and its value has thus become a function of the perception of the general public. Need plus demand is what determines proportionate reciprocity in a given situation. Aristotle explains that the parties form their own estimations, bargain in the market, and make their own terms and exchange ratios. The exchange ratio is simply the price of things. For Aristotle, what is voluntary is presumed to be just; that is, exchange must be mutually satisfactory. He sees mutuality as the basis for exchange and the equating of subjective utilities as the precondition of exchange. There is a range of reciprocal mutuality that brings about exchange. The actual price is determined by bargaining between the two parties, who are equal as persons and different only with respect to their products. In this analysis, it is presumed that there is an equitable relationship between trading partners and that both parties have access to factual information about the product. In today’s corporate-driven economy, there is hardly any equitability between corporations and consumers.

For Aristotle, money is a medium of exchange that makes exchange easier by translating subjective qualitative phenomena into objective quantitative phenomena. This has always been the purpose of money: money acts as the agent of commensuration between two commodities, so that the shopper or trader does not have to be burdened with a historical evaluation of a product. Although subjective psychological want satisfaction cannot be directly measured, the approximate extent of want satisfaction can be articulated indirectly through money. However, this cannot involve greed or fear, as neither has any real equivalence in tangible terms. As such, money can supply a convenient and acceptable expression for the exchange ratio between various goods. Money, as an intermediate measure of all things, is able to express reciprocity in accordance with a proportion and not on the basis of a precisely equal ratio. Money, according to Aristotle, became a convention or type of representation by which all goods can be measured by some one thing. In the modern era, however, money has become a modulating element and a representation of demand, thus legitimizing greed and fear transformed into tangible values.

1 New science does not use the term ‘morality’, but scientifically morality and ethics are the same (Islam, 2017).

2 Kashif S. Ahmed, “Arabic Medicine: Contributions and Influence,” in the proceedings of the 17th Annual History of Medicine Days, March 2008, Calgary (2008): 155.

3 This, in essence, reversed the reality concept of Plato, who equated reality with the long-term.

4 Dictionary.com describes fungible as: “(esp. of goods) being of such nature or kind as to be freely exchangeable or replaceable, in whole or in part, for another of like nature or kind.”

5 Why is intrinsic value important even if something without it can be valuable subjectively? It goes back to the ‘moral economy’ Aristotle talked about: things have to be real in order to have long-term sustainability, and the right-and-wrong standard we should be using for the economy rests on this notion of value. Real value leads to sustainability, whereas artificial value leads to eventual unsustainability.

6 GitHub is a development platform that empowers individuals to retain their original style. From open source to business, one can host and review code, manage projects, and build software alongside millions of other developers.

Chapter 3
The Incompatibility of Conventional Economic Analysis Tools with Sustainability Models

3.1 Introduction

Mark Twain said, “The lack of money is the root of all evil.” This has become the first premise of today’s religions. Ever since the yoke and diktat of the Roman Catholic Church was put in place, ‘religion’ has been formally conflated with the Money god that remains today the undisputed almighty. Ramified with the moral compass of the Aquinian bible and its Church-approved Aristotelian conception of the natural order, the Eurocentric view makes room for both the notion of truth as a spectrum and the notion of knowledge as an amalgam of truths and falsehoods. Consider the following brief catalogue of contemporaneous transitions: from Trinity to infinite god (desire being the god); from God, Son, and Holy Ghost to Money, Sex, and Control; from Church, Monarch, and Feudal lords to Corporation, Church, and Government. Today, humanity is defined only by the ‘Dark Triad’ variables (Machiavellianism, Psychopathy, and Narcissism). The humanity that defines today’s economic models is uniquely applicable to a culture disconnected from Conscience. In this culture, deceit holds the key to money: money for more money, sex, and control. This is true at all levels of modern life, from personal life to global governance (Lee et al., 2012). Power, money and sex define today’s economic system, which drives the society at large. No singular event marked the existence of this unholy trinity more than the rise of Donald Trump to the US Presidency. The rise of Donald Trump in US politics marks an important point in the history of the politics of economics. More importantly, events during the Trump Presidency have demonstrated how hopeless economic theories and predictive tools are, in keeping with New science in general (Khan and Islam, 2016).
The 45th president of the USA has been characterized as the most unpleasant character in the USA, with epithets such as jealous, petty and greedy; an unforgiving narcissist; a vindictive ethnic cleanser; a misogynist; a homophobic racist; a serial liar; a remorseless sexual predator; an Islamophobic, sociopathic, megalomaniacal demagogue; a capricious bully; and a self-serving, nepotistic con artist without a shred of conscience. The anti-Trump chorus has control over all three components of the Establishment: the financial establishment, the political establishment and the corporate media establishment. Even the likes of former Republican president G. W. Bush and Republican nominees John McCain and Mitt Romney have added their voices against Donald Trump. Pundits who laughed at his nomination bid and congratulated themselves for predicting the unprecedented defeat they saw Donald Trump facing in the November election then predicted the total collapse of the US economy and an unprecedented rise in the gold price. When the opposite happened, they moved to predicting the rise of Fascism unless Donald Trump is assassinated or impeached. ‘Never Trumpers’ have been relentless in pursuing a status quo that has become threatened by the rise of Donald Trump. Jaan Islam (2016) referred to Trump’s election to the presidency in these terms: “A year ago, people would believe that these words could have only been said by a delusional individual. Neither the political status quo, nor the Wall Street-based establishment could have imagined any such an event. While the Democratic National Committee, the Media, and even State Governments were conspiring to suspend Bernie Sanders’ anti-establishment revolution, Trump set his eyes on the Presidency.” Trump’s Presidency is an important event in the political economy of the modern era. Clearly, the fact that Trump, a political outsider, defied traditional politics, outright rebelled against his own party, and won an election that was, in the eyes of the establishment, a ‘given win’ for Hillary Clinton continues to bother that establishment. The establishment, dubbed the ‘deep state’ by Trump supporters, is convinced that Trump’s economic, trade, foreign, and immigration policies, which defy the traditional boundaries of political practice and frighten major Wall Street bankers, will bring down the USA as we know it, and that the country must be saved from a ‘Fascist regime’. One would recall how, 10 days before the election, Clinton was already talking about who would fill cabinet positions and how she would be the president of all Americans. It was the Republicans who were cringing and rushing to save their seats by literally slandering Trump. Even a group of Nobel laureate economists got together and jointly expressed concerns over how disastrous a Trump win would be, predicting that stock prices would collapse, gold prices would skyrocket, and America would become a laughing stock.
When that prediction failed spectacularly (the gold price falling by 10%, stocks rising to a record high and breaking the 20,000 ceiling), the likes of Nobel laureate economist Joseph Stiglitz continued the narrative: “There is a broad consensus that the kind of policies that our president-elect has proposed are among the polices that will not work” (Miller, 2017). Others joined in and predicted a perilous future (Smith, 2017). The establishment had hoped that Trump’s agenda could not move forward, for the following reasons. First, the Congress, the Senate, and even the Supreme Court could be expected to maintain traditional political practice and would therefore stop Trump’s ‘caustic policies’. Secondly, as was the case with President Bush, inexperienced presidents have long learning curves, which makes room for close advisors to shape the President’s policy ideology. Vice President Michael Pence, a member of the Republican orthodoxy (i.e., the establishment), could offer a much-needed reprieve in case Donald Trump were declared ‘incapacitated’ under the 25th Amendment.1 The final hope, within the framework of democracy, lay with the federal court system (all the way up to the Supreme Court), which would act as a major obstacle even to Trump’s executive orders. This is what we see happening with the so-called ‘Muslim ban’. In stunning rulings, several federal courts have shown decision makers on both sides of an issue to be variously “wrong but right” or “right but wrong”, largely as a result of partisanship, laziness, incompetence and corrupted morals. In any case, obtaining an interim injunction is a very far thing from winning a challenge

to the Executive Order. The Courts’ legal reasoning and logic in both recent interim judgments seem no less flawed than the Executive Order itself. U.S. District Judge Theodore Chuang ruled that the plaintiffs had standing and a likelihood of success on the merits of their claims, including claims that the executive order discriminated on the basis of religion. The nationwide preliminary injunction will remain in place indefinitely until it is either lifted by the Maryland judge or overturned by a higher court. Federal judges’ rulings prevented core provisions of the recent executive orders, dealing with immigration and a travel ban, from going into effect nationwide. While President Trump slammed the decision as “an unprecedented judicial overreach,” such an outcome was predicted. It is clear that the establishment is not about to cede power back to the people, despite the revolutionary statement made by President Trump on the day of his inauguration, when he said, “Because today we are not merely transferring power from one administration to another, or from one party to another – but we are transferring power from Washington, D.C. and giving it back to you, the American People.” So, even if Trump’s election does not have major immediate impacts, it is possible that this small disruption in the American and global political systems and economies will lead to major ruptures in the current status quo in the future. After all, this is the first time in the USA that a president is acting like a policy maker rather than a policy taker. Yet all predictions have failed, and if anything has been learned, it is that modern economic theories and prediction tools have failed miserably. It is no surprise that Donald Trump gave a “Fake News Award” to Nobel Laureate economist Paul Krugman, who had written on the day of the President’s inauguration that the “economy will never recover” (Dedaj, 2018). Of course, the failure of predictive tools does not imply that a healthy economy has been restored. 
The stock market continues to be one of President Donald Trump’s favorite indicators of the country’s health as stocks continue to hit all-time highs. However, a shrinking share of Americans are getting rich off the market’s dizzying rise, according to a recent analysis. The top 10% of American households, as defined by total wealth, owned 84% of all stocks in 2016, according to a recent paper by NYU economist Edward N. Wolff (Wile, 2017). Wolff (2017) wrote, “Despite the fact that almost half of all households owned stock shares either directly or indirectly through mutual funds, trusts, or various pension accounts, the richest 10% of households controlled 84% of the total value of these stocks in 2016.” This number represents a big change from 2001, when the top 10% owned just 77% of all stocks. Furthermore, while virtually all (94%) of the very rich reported having significant stock holdings – defined as $10,000 or more in shares – only 27% of the middle class did (Figure 3.1). The study defined the middle class as the group between the poorest 20% and the richest 20% of Americans. The concentration of stock holdings among the rich, Wolff states, is due to the twin stock market busts of 2001 and 2008. While the middle class was scared off by these declines, wealthier investors were able to swoop in and increase their holdings. This reconnects us to the repeatedly failed models of the past.

Figure 3.1 The rich are benefitting the most from the stock market’s historic run (Wile, 2018). In this chapter, economic models are examined in order to demonstrate that the shortcomings of new science are mirrored in economic models, thereby creating the trap described in a remark often attributed to Einstein: “we can’t solve problems by using the same kind of thinking we used when we created them.” Because economics is the driver of engineering, it becomes impossible to design, let alone evaluate, any sustainable project within a field of economics that regards the concepts of ‘zero-waste’2 (as distinct from waste minimization) and sustainable petroleum development (as distinct from renewable energy development) as absurd, often coloring them as oxymorons. In their own right, economic models all collapse when a zero interest rate is invoked (just like zero waste in engineering) or when gold is used as a standard (as opposed to a ‘commodity’). This chapter shows how there must be a paradigm shift in economic models that finally moves away from age-old models that sprang from the ‘original sin’ model only to morph into a ‘selfish gene’ model through a transition via the concept of the ‘selfish man’, making both the ‘right’ and the ‘left’ belong to the same side of aphenomenality. It is shown that, just like zero-waste engineering of total sustainability, economic models must bring back the notion of an inherently productive mode in place of producing ‘value’ out of nothing. The chapter points out how all existing economic models suffer from spurious assumptions and are thus incapable of capturing the reality of true sustainability. All models regurgitate centuries-old notions and fail to recognize the true potential of the Information Age. This chapter shows how the Information Age has made it both necessary and possible to account for intangibles that eluded previous scientists, engineers, and economists. 
The chapter shows how the Information Age opens up opportunities to move away from the constraints of capital-intensive investment models – so much so that financial institutions have little role to play in the development of sustainable projects. This is the antithesis of the financialization model, which has been all but declared the only solution to the economic crisis while sitting at the center of the downward spiral of the global economy.

3.2 Current Economic State of the World Plato said, “Strange times are these in which we live when old and young are taught falsehoods; And the one man that dares to tell the truth is called at once a lunatic and fool.”

Few question the notion that these are ‘strange times’ when it comes to politics. However, even fewer understand the science behind these ‘strange times’, fewer still appreciate how such times have pervaded all aspects of our civilization, and practically no one sees this as a problem in the science and technology development sector. Many dislike the current system, but few see the big picture and the direction in which our civilization is moving, and none can tell us how to fix the system. A common slogan of every epoch is that we are on a path of moral, ethical, and overall degeneration. For this millennium, one such warning was sounded by Adams and Jeanrenaud (2008), who wrote: “The new millennium started with a profound wake-up call. Over the past eight years scientists worldwide have provided policy makers with some daunting facts, which taken together present an alarming picture of the future. In 2005 we learned that nearly two-thirds of the world’s ecosystems – our life support systems – are degraded and being used unsustainably, leading to irreversible damage in some cases. In 2007 we learned that the evidence for climate change, resulting from carbon dioxide emissions from human activities, is now unequivocal, with potentially catastrophic results. We are also nearing a period of peak oil, the point at which the maximum rate of global petroleum production is reached, after which supplies decline and prices rise, with profound implications for the global economy.” Over a century ago, Nobel laureate scientist Svante August Arrhenius (1859–1927) also sounded the alarm of unsustainability. While his science of CO2-induced catastrophe caught on, few noticed that he was also a board member of the Swedish Society for Racial Hygiene (founded in 1909), which contributed to the topic of contraceptives around 1910. His understanding of humanity was primitive, but no one took notice. Albert Einstein, the icon of the previous millennium, observed (in a letter to Dr. Otto Juliusburger, 1946) the “horrifying deterioration in the ethical conduct of people today … the disastrous by-product of the development of the scientific and technical mentality”. An oriental proverb reiterates this emotion: “Enough for the whole world, but not enough for one greedy person”. Natural resources appear to be so mismanaged that everyone is questioning whether the current level of consumption can be sustained. Some two decades ago, this emotion was echoed by a friend of the environment, who wrote, “Most global environmental degradation is caused by a small minority of the world’s population – the 20 per cent of mainly Western people who consume 80 per cent of the world’s resources. The planet cannot sustain current total levels of consumption, let alone 5 billion people, or almost 10 billion within 60 years, consuming as Westerners do now.” (Boyle, 2000).

3.2.1 Economic Extremism In the contemporary world, capital-centredness is the ultimate source of the problems unleashed by an economic order based on oligopolies, monopolies, and cartels, all loyal to the same doctrine that has been the single most important source of disparity in the world. The main

features of a properly delinearized history of how we have arrived at this point go something like this: The advent of property laws made “legal” after the fact what had actually been acts of misappropriation. In nineteenth-century America, following the Civil War, specifically in order to protect and encourage corporate property, this “right” to retain control or ownership of any form of property – especially property already accounted as a business asset (whether it originated as a natural resource or as a claim on someone else’s labour) but acquired without “colour of right” (i.e., before there existed any law specifically defining or dealing with its legal existence as property: Latta 1961) – was consciously elaborated as an exception to the Rule of Law. As a result of wars and other struggles waged to protect this corporatized form of property and the technological development stemming from it – including associated long-term toxic effects – the world now finds itself in an environmental crisis (Rich 1994). The deepening of this crisis is marked by a simultaneous extension of corporate abuse of Humanity’s rights of access to fresh air, clean water, and other absolute necessities, alongside a growing rebellion, by the human productive forces sustaining these corporations as their market, against government accommodation of the abusers and their abuses. Today, water and air have become commodities. Governments and corporations now own access to water. Overuse by industry and agriculture has made it a scarce commodity over which future wars will be waged. With supplies contaminated by industrial and agricultural run-off, the sale of bottled water, or of home filters, to the public who can afford them is promoted as ‘uncontaminated’. Pollution itself, created by chemical poisons released into the air by industry, agriculture, and the automobile, has become a money-making commodity – with the sale of pollution ‘credits’ from one polluter to another. 
Home filters to ‘clean’ the air in homes and public buildings promote clean air, again for those who can afford it, and autos in many places are required to have catalytic converters to filter out poisons emitted from the burning of fuel, to keep down air pollution. At the same time, corporate activity, with its virtual immunity from prosecution, legal sanctions, or legal responsibility in its home bases and main markets, is maintained by dumping unwanted wastes in various parts of Africa and Asia. This is arousing more and more people in the rest of the world against corporate fiat and dictate, energizing in its wake a rapidly widening discussion of alternative arrangements for Humanity’s continued existence on this planet. Accordingly, the intention to control nature has become the last remaining pathway by which corporations hope to ensure a constant, never-ending stream of profit – and a battleground on which the fate of Humanity for generations to come may be decided. The current economic world order has led to an amassing of wealth that can best be described as ‘obscene’ (Islam et al., 2015). Figure 3.2 summarizes the extreme nature of the world economy – the driver of today’s civilization. Oxfam reported that 388 billionaires had the same amount of money as the bottom 50% of the Earth’s population in 2010. The charity’s report also said that the richest 1 percent of the population would own more than half the world’s wealth by 2016. The number of people whose wealth is equal to that of the poorest half of the world’s population has declined steadily since 2010 (Figure 3.3).

Figure 3.2 Past and future projection of shares of global wealth of the top 1% and bottom 99% (Oxfam Report, 2018).

Figure 3.3 Past shares of global wealth of the top 1% and bottom 99% (from Khan and Islam, 2016). The Oxfam report states that a revision was possible and necessary due to new data made available by the bank Credit Suisse. Under the revised figures, 42 people hold as much wealth as the 3.7 billion people who make up the poorer half of the world’s population, compared with 61 people the year before and 380 in 2009. This is a significant and catastrophic rise from the previous year’s statement that eight billionaires held the same wealth as half the world’s population. Oxfam also reported that the wealth of billionaires had risen by 13% a year on average in the decade from 2006 to 2015, with an increase of $762bn (£550bn) in 2017 – an amount seven times what would be needed to eradicate world poverty. The latest figure is that the richest 1% own 82% of global wealth (Suliman, 2018). Four out of every five dollars of wealth generated in 2017 ended up in the pockets of the richest one percent, while the poorest half of humanity got nothing, as per the Oxfam report published in 2017. Ironically, the USA, the most powerful economy in the world, itself suffers from extreme poverty at many levels. A recent UN special monitor’s report was produced by Philip Alston,

the UN’s special rapporteur on extreme poverty and human rights (Pilkington, 2017). His fact-finding mission into the “richest nation the world has ever known” has led him to investigate the tragedy at its core: the 41 million people who officially live in poverty. Of those, nine million have no cash income and receive nothing in welfare. This unflattering picture of America sums up the economic extremism of the Information Age. Figure 3.4 summarizes the nature of economic extremism. The share of the top 0.1% wealthy people shows an exponential rise soon after Nixon delinked the U.S. dollar from the gold reserve. In 1976 the richest people had $35 million each (in 2014 dollars). In 2014 they had $420 million each – a twelvefold increase. One can be sure it has become even more extreme since then.
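The arithmetic behind the ‘twelvefold’ claim can be checked directly. A minimal sketch, using only the two figures cited above ($35 million in 1976 and $420 million in 2014, both in 2014 dollars):

```python
# Growth of average top-0.1% wealth, from the figures cited in the text
# (assumed: $35M in 1976 and $420M in 2014, both in 2014 dollars).
start, end = 35e6, 420e6
years = 2014 - 1976

multiple = end / start                   # overall increase
cagr = (end / start) ** (1 / years) - 1  # implied real annual growth rate

print(f"{multiple:.0f}x increase")       # 12x increase
print(f"{cagr:.1%} per year")            # 6.8% per year
```

The implied real growth rate of roughly 6.8% per year over 38 years far outpaces the GDP growth figures listed later in this section.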

Figure 3.4 Economic disparity and movement of wealth in the USA. During this period, wealth has been concentrated so much in the hands of the top 0.1% that they could spend $20 million every year and still just keep getting richer, forever, even if they did absolutely nothing except choose some index funds, watch their balances grow, and shop for a new yacht for their eight-year-old. This trend shows the complete takeover of need by greed as the driver of the economy. Figure 3.5 shows on the same graph both the top and bottom extremes of the U.S. economy. The bottom 90% of Americans aren’t even visible on this chart – and yet it’s a very tall chart. The obscenity of this inequality is manifested by the fact that American households’ total wealth is about $95 trillion – roughly three-quarters of a million dollars for every American household – yet roughly 50% of households have zero or negative wealth.

Figure 3.5 Economic disparity in the USA. The bottom (visible) pink line is the top 10% (original data from Credit Suisse, 2014). Such inequality is inherently implosive: concentrated wealth is the economic equivalent of the trans fat of an obese body. Just as such body fat leads to all sorts of ailments, wealth concentration leads to the collapse of the economy, strangling economic growth and national prosperity. Consider the fact that people with a great amount of money do not have the capacity to spend it. The money that could have circulated in society, adding real value and driving economic growth, instead sits idle. The more money people accumulate individually, the slower economic growth becomes. In this process, the number of people

doesn’t matter; what matters is how much money is in the hands of the richest 0.01%. Table 3.1 shows spending for different groups of Americans.

Table 3.1 Spending in various households, by household quintiles of income level, 2013 (from Roth, 2017).

                            Lowest 20%  Second 20%  Third 20%  Fourth 20%  Highest 20%
Average wealth ($)              86,100     112,700    168,600     333,600    3,307,900
Average spending ($)            37,384      54,099     70,542      97,790      164,758
Spending as a % of wealth          43%         48%        42%         29%           5%
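The ‘spending as a % of wealth’ row can be reproduced directly from the wealth and spending figures in Table 3.1. A minimal sketch, using only the figures Roth (2017) reports:

```python
# Reproduce the "spending as a % of wealth" row of Table 3.1 (Roth, 2017).
quintiles = ["Lowest 20%", "Second 20%", "Third 20%", "Fourth 20%", "Highest 20%"]
wealth   = [86_100, 112_700, 168_600, 333_600, 3_307_900]
spending = [37_384,  54_099,  70_542,  97_790,   164_758]

for q, w, s in zip(quintiles, wealth, spending):
    print(f"{q}: {s / w:.0%} of wealth spent per year")
```

The output matches the table’s bottom row (43%, 48%, 42%, 29%, 5%), making the turnover gap between the bottom quintiles and the richest quintile explicit.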

Under this scenario, the bottom quintiles turn over 40% or 50% of their wealth every year, while the richest quintile turns over 5%. For a given amount of wealth, wider wealth dispersion means more spending (Roth, 2017). The top 0.01%, on the other hand, have about $5 trillion between them. Consider what even half of that wealth would do if it were distributed around the population. The very motivation of economic development is to bring society to a social equilibrium, to a homogeneous state. Any departure from that process is a recipe for disaster. This is the core of the unsustainability of the U.S. economy, which is considered the model by the rest of the world. What such an economy needs is government intervention in order to restore social justice. The opposite has happened: every government intervention has made the situation worse. Douglas French points this out in the Foreword to the 2009 edition of Mises’ (1912) book, The Theory of Money and Credit: “These developments would not have surprised Mises. In discussing “The Freely Vacillating Currency” he wrote that the United States was “committed to an inflationary policy,” and except for the “lively protests on the part of a few economists” the dollar would have been on its way to being “the German mark of 1923.” Indeed America’s debts at this writing exceed those of Germany in 1923 – even relative to the size of the U.S. economy, author and financial commentator Bill Bonner writes, in fact 100 times greater.” (p.3) The U.S. government has become an accomplice in the ‘crime’ committed against the general public. In the old days, conservative values meant smaller government in order to maximize civil liberty, and democratic values meant larger government in order to help homogenize social status. These values have changed for both sides, as both have joined in increasing the size of a government that actually helps create greater divergence between the rich and the poor (Figure 3.6).

Figure 3.6 National debt increase during various presidencies. When President Donald Trump initiated the ‘largest tax cut’ program, he was thoroughly criticized and a comparison with President Reagan was made. The Reagan recovery was falsely attributed to tax cuts; in reality, that recovery was due to increased government spending and a lower interest rate (undoing the misery index of the Carter years). Throughout history, there has never been such a careful effort to distribute taxation in a way that moves the economy as in President Trump’s tax cut. For instance, mid-income households get money, and we know they spend it all (meaning economic growth). Higher-income households also get a break – but not really, because with the break they merely drop back to Reagan-era rates; these households are also likely to spend, so that is good news. The super-rich households, finally, get little tax cut on their declared incomes. As we know, these top earners never give themselves much salary, and they are the ones obsessed with saving or making money out of nothing; sure enough, not an extra penny will go into their pockets. Compare that with previous administrations. President Clinton raised the top tax rate from 31% to 39.6% for incomes over $250,000. His was the peace dividend era – and the whole benefit went to whom? Government certainly did little to homogenize the economy. Bush 43 cut the top rate to 35% and also cut taxes on capital gains and dividends. As a result, government grew, and so did defense contracts, insurance company profits, and Big Pharma. In 2012, Obama restored the 39.6% rate, and he went down as the worst GDP president of modern times. Here is the complete list of average annual real GDP growth by postwar president (in descending order):

Johnson (1964–68), 5.3%
Kennedy (1961–63), 4.3%
Clinton (1993–2000), 3.9%
Reagan (1981–88), 3.5%
Carter (1977–80), 3.3%
Eisenhower (1953–60), 3.0%
(Post-WWII average: 2.9%)
Nixon (1969–74), 2.8%
Ford (1975–76), 2.6%
G. H. W. Bush (1989–92), 2.3%
G. W. Bush (2001–08), 2.1%
Truman (1946–52), 1.7%
Obama (2009–15), 1.5%

Figure 3.7 shows the GDP trends for these US presidents: President Johnson’s era showed the highest GDP growth, while President Obama’s showed the lowest. This confirms that GDP growth cannot be attributed to a particular party.

Figure 3.7 Average annual real GDP growth by postwar president (data from the Dept. of Commerce, Bureau of Economic Analysis). At the core of all economic theories is a fundamentally flawed premise that has led to the deification of Money and the robotization of humans. This premise was recently summed up by Tim O’Reilly (2016): “Here is one of the failed rules of today’s economy: humans are expendable. Their

labor should be eliminated as a cost whenever possible. This will increase the profits of a business, and richly reward investors. These profits will trickle down to the rest of society.” Corporatized economics, the driver of the modern era, has become anything but economizing. In a manner similar to how physics has been turned into a ‘science of the artificial’, diverging ever further from its root meaning of ‘science of nature’, every new product rollout consciously disguises or conceals the extent to which today’s “new and improved” versions are either potentially or actually more wasteful than what they replace. The plain fact is that “the whole truth and nothing but” does not necessarily sell, so falsehoods to some greater or lesser degree must be promoted. These falsehoods are frequently disguised as what the courts and regulatory agencies euphemistically call “exaggerated claims”.

3.2.2 Paradoxical Economy in the Era of the Artificial Economics is supposed to be a bottom-up process, in which the general public drives the economic system and the government makes sure each of its decisions is a result of the general public’s collective wish. Instead, what we have had over the last century is a process that has turned this bottom-up system into a top-down one, in which decisions are made by a third party that controls government policies and tilts every policy to benefit the few elite who control the government, the financial establishment, and the corporate media, leaving the general public totally detached from control of the economic system. Philosophically, what happened to morality, i.e., detachment from conscience, has happened to society collectively. We have a trickle-down economy that is touted as the solution, and the only options people get to choose from both involve benefiting the richest elite. We have a progressive degeneration from the natural to the artificial, creating paradoxes. The modern age is synonymous with paradoxes: everything surrounding us points to contradiction upon contradiction. If the philosophy of dogma introduced the contradiction of a mortal and immortal entity in one body, we are experiencing the worst of dogmas in today’s society. We have made unprecedented progress in technology development, only to hear from Nobel Laureate chemists (e.g., Robert Curl) that ours is a ‘technological disaster’. We claim to have progressed from the dark ages of savagery and lack of empathy to modern enlightenment – only to hear from some of the most ardent supporters of the modern European mindset of Capitalism (e.g., Pope Francis I) that unfair economic structures that create huge inequalities are actually an act of terrorism. We claim to have reached the pinnacle of human equality and democratic values, only to hear from Nobel Peace laureates (e.g., U.S. 
president Obama and the Egyptian politician Mohamed el-Baradei) that such rights belong only to a small group, as the enlightened world is fed the news of a military coup in the form of the headline “Kerry Lauds Egypt Military for ‘Restoring Democracy’.” We continuously hear about the threats to human existence that loom over us because of the process that we have followed as a human race. In this process, modern society has become an expression of contradictions, as depicted in Figure 3.8.

Figure 3.8 Our current epoch is an epic failure of intangible values. What we call the “Era of the Artificial” – synchronized with the introduction of synthetic plastic over a century ago – has turned the world into a ‘technological disaster’ (Chhetri and Islam, 2008). Others call this era the Anthropocene, which refers to the current geological epoch of extensive human modification of ecological and geological processes (Crutzen, 2002). Figure 3.9 shows how the same era is connected to industrialization, which has become the driver of population explosion, concentration of urban population, and, unfortunately, total GDP increase. Because GDP is considered the most useful metric of economic growth, a negative correlation between economic growth and environmental integrity becomes apparent. Such a correlation creates the illusion that environmental restoration or long-term sustainability is economically imprudent and is in collision with technological progress.

Figure 3.9 The change in human collective activities from 1750 to 2000 (from Adams and Jeanrenaud, 2005). Similarly disturbing correlations arise when one looks into mortality by cancer, in which case

cancer represents the inevitable outcome of a lifestyle that has seen increasing use of artificial products. Figure 3.10 shows the mortality rate for lung cancer in Sweden (Hallberg, 2002). The raw data show that the ‘natural’ death rate increased from about 5 per year in 1912 to 10 in 1954; from 1955 it increased to 35 in 2000, showing exponential growth during this period. As this trend continues, rather than turning to a natural lifestyle, a reverse correlation between the mortality rate and healthcare spending is sought (Pritchard and Hickish, 2011), while at the same time the profit margin of the pharmaceutical companies skyrockets.

Figure 3.10 Lung cancer mortality rate in Sweden during the 20th century (Hallberg, 2002). Figures 3.11 and 3.12 show a similar rise in obesity and in the occurrence of childhood diabetes, respectively.

Figure 3.11 Rise in obesity (OECD analysis of health survey data, 2011).

Figure 3.12 Incidence of diabetes in children under age 10 years in Norway, 1925–1995 (from Gale, 2002). The failing of the current civilization stems from the fact that humanity itself is not defined. If we do not know what makes us human, we are likely to subscribe to a theory of humanity, i.e., of the purpose and way of human life, that leads to the transition depicted in Figure 3.13. In other words, the development of a sustainable economic theory in line with both human nature and mother nature (which is the most sustainable system for the long term) requires an understanding of the meaning of humanity, or answering the question: “what is human?”

Figure 3.13 Robotization of humanity: the current state of the world. Currently, in the “Age of the Artificial,” the notion of ‘perfection’ of humanity has been conflated with artificial perfection: trying to look like a Barbie doll, do math like a robot, be fast like a computer – while forgetting that none of these is a standard for humanity (Islam et al., 2017). In 2017, Saudi Arabia offered citizenship to a robot (Alloway, 2017) made in the USA, but refuses to afford the same distinction to a fellow human born in Saudi Arabia unless the parents are Saudi nationals. This is the same country that revokes the citizenship of a Saudi national who decides to become a citizen of another country and bans him or her from re-entering Saudi Arabia. The USA is not trailing either. In the USA, a citizen can have upwards of 10,000

girlfriends/boyfriends, but polygamy is banned. It is also criminal to marry one’s first cousin in dozens of states in the USA, while same-sex marriage is legal in every state. As for the real ‘enlightened’ ones – the ones even liberal Muslims call ‘close to a true Islamic Caliphate’3 – marriage among siblings is allowed, or one can sue the government for not allowing it. At the same time, having children with a married spouse is not fashionable: in Iceland, 67% of babies are born outside of marriage (Weir, 2017). In the current epoch, the biggest loser has been the wellbeing of humans caught in the paradoxes of modern, artificial life. The biggest industry is the ‘defense’ sector that promotes war to ‘restore peace’; the second biggest is the pharmaceutical sector that promotes medicines. There are trillions of dollars of medicines available in the market, yet there is not a single medicine that cures4 a single disease (Islam et al., 2015). Figure 3.14 shows the evolution of the defense budget of the USA. Every war has given a bump to the defense budget. While such increases would be natural in a normal situation, they become artificial if the war is started under a false pretense. For the Vietnam War, the Gulf of Tonkin scandal made it clear that the war was started under a false pretense5. Of course, the most blatant example of such false pretense was the Iraq War, which was started under two false pretenses, both of which were planted by the CIA. Another point relates to President Ronald Reagan’s military buildup through the so-called Star Wars program. As we will see in the follow-up discussion, Reagan achieved this feat at the expense of national debt, not through savings made by downsizing the government. After the collapse of the Soviet Union and the virtual political monopoly of the USA on the world stage, the so-called peace dividend kicked in during the Clinton era (1993–2000). 
The trend ended with the 9/11 terror attack, which triggered incessant wars – all of them under false pretenses. The new trend of skyrocketing defense budgets peaked after the 2008 financial collapse. During the same period, the USA fell into a recession that continued for a number of years, and significant bailouts failed to resuscitate the ailing economy. Even though the latest trend appears to be stagnation, the real picture is not promising, as a significant portion of spending was hidden through the designation of certain items, and an increasing number of defense contractors were not included in the defense budget (Jeffrey, 2015).

Figure 3.14 US defense spending in recent history (From John Fleming/The Heritage Foundation, 2018). Over the years, another component has brought down economic efficiency. That is the unprecedented profiteering of the defense contractors. As early as 1984, Arthur Schlesinger Jr. wrote in his New York Times article, titled: Closing down the Pentagon Follies,

“General Dynamics, it appears, has charged the Pentagon – that is, the American taxpayer, you and me – $7,417 for an alignment pin that the humbler among us buy for 3 cents at the corner hardware store. McDonnell-Douglas charged $2,043 for a six-sided nut that costs 13 cents at the store. Pratt & Whitney charged $1,118 for a 22 cent plastic cover for stool legs. Hughes Aircraft charged $2,543 for a $3.64 circuit breaker. And in each case the Defense Department meekly handed our money over to the corporate highwaymen. The Boxer testimony represents only the latest installment in the long-running Pentagon scandals. Last spring, an enterprising Representative, Berkley W. Bedell of Iowa, went to the hardware store and bought 22 tools in a military repair kit. He paid $92.44. The taxpayers via the Pentagon paid $10,186.56 for the same tools. One doesn’t know which is worse: the extortionate contractors or the complaisant Administration.” This state of US politics was captured by President Donald Trump throughout the presidential campaign, as well as by Senator Bernie Sanders (who started the theme ‘the system is corrupt’, citing corruption in the financial establishment, the political establishment, and the corporate media), and has led millions of Americans to lose trust in the political system along with the mainstream media (Islam, 2018). There is little doubt that Trump was correct in asserting that US corruption has polluted the entire world and must be stopped in its tracks (Chayes, 2017). The ‘corruption’ is spearheaded by disinformation campaigns that turn information into propaganda. This disinformation process is an industry all by itself, formalized through the process of lobbying. 
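The scale of the overcharging in Schlesinger’s examples above follows directly from the prices he quotes. A minimal sketch, using only those quoted prices:

```python
# Markup factors implied by the prices quoted in Schlesinger's 1984 article:
# (Pentagon price, retail price) in dollars.
items = {
    "alignment pin":   (7_417.00, 0.03),
    "six-sided nut":   (2_043.00, 0.13),
    "plastic cover":   (1_118.00, 0.22),
    "circuit breaker": (2_543.00, 3.64),
    "22-tool kit":     (10_186.56, 92.44),
}

for name, (pentagon, retail) in items.items():
    print(f"{name}: {pentagon / retail:,.0f}x retail price")
```

Even the mildest case, the 22-tool repair kit, carries a markup of about 110 times the hardware-store price; the alignment pin is marked up by a factor of nearly a quarter of a million.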
Cohen (2015) points to the fact that the Pentagon’s top contractors sent an army of more than 400 lobbyists to Capitol Hill in a single spring – a massive effort costing tens of millions of dollars of their own funds from April to June of 2015 – to press their case for increasing the nation’s spending on military hardware. The contractors were upset in part because most military spending had been capped for the past few years under budget controls meant to rein in government debt. The caps had forced a decline in the main defense budget from about $528.2 billion in fiscal 2011 to $496.1 billion in fiscal 2015, instead of a previously projected increase to roughly $598 billion. Mounting frustration with the caps was evident in the administration’s submission that year of a military budget that exceeded the limits by about $38 billion, followed by moves by both branches of Congress to add even more billions. This back-and-forth transfer of huge sums of money had nothing to do with the public interest or the public good. In the end, in September 2017, the Senate passed a defense budget of $700 billion, far more than what President Trump had asked for: he requested an increase of $54 billion and was given an $80 billion increase instead. Lost in this insanity was the fact that such spending could resolve many of the economic problems facing the USA. Emmons (2017) pointed out that the increase in defense spending alone could make college education free. One would recall that Senator Bernie Sanders drew ridicule during the 2016 presidential campaign when he pledged to make tuition free at public colleges and universities; critics from both parties were convinced that such an ‘insane’ measure would bankrupt the US economy. The trend continued, and clearly the only time both Democrats and Republicans agreed to support President Donald Trump was when it came to bolstering defense spending, all the way up to an unprecedented $1.3 trillion.
To put this in perspective, back in 2015, lobbying expenditures by the 53 top defense contractors that reported paying for such work in the second quarter of 2015 were more than 25 percent higher than the amount they spent in the same quarter of 2014 – $58.5 million instead of $45.7 million. Among the 15 defense contractors that spent $1 million or more to lobby in the quarter, Boeing and General Electric had the largest increases in lobbying spending compared with the same period in 2014. General Electric almost tripled its lobbying spending compared with the earlier period, from $2.8 million to almost $8.5 million. Boeing more than doubled its spending for the quarter, from almost $4.2 million in the second quarter of 2014 to $9.3 million in the second quarter of 2015. Of the 655 lobbyists employed by the contractors, 423 specifically lobbied on defense, in some cases along with other issues, according to the lobbying reports. This trend is the collective version of false advertising at the personal level, and it extends to other sectors as well. General Dynamics, for example, paid for 74 lobbyists – more than any other contractor – and 70 of them lobbied on defense, as part of its $2.7 million lobbying tab. Lockheed Martin Corp., the world’s largest defense contractor, spent $3.5 million and enlisted 64 lobbyists to press government officials, including 56 who lobbied on defense as well as other issues.

The second biggest drain on the economy is the profiteering of the pharmaceutical industry. Figure 3.15 shows how healthcare costs have risen throughout the modern era. Even though it is generally asserted that the US government does not spend as much (as a percentage of GDP) on healthcare as other countries, and the US healthcare system is certainly not considered on par with other developed countries in terms of accessibility, Figure 3.15 shows that the percentage of GDP spent on healthcare in 2015 was 50% higher for the USA than for its closest rival (Switzerland).
In fact, the difference between the USA and other countries has been increasing, showing the out-of-control nature of the healthcare cost burden on the US public.
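The relative increases in lobbying spending quoted above are easy to verify with elementary arithmetic. The sketch below uses only the dollar figures cited in the text; it is a back-of-the-envelope check, not new data:

```python
# Back-of-the-envelope check of the lobbying-spend increases quoted above
# (Q2 2014 vs. Q2 2015, figures in $ millions, as cited in the text).
spending = {
    "Top-53 contractors": (45.7, 58.5),
    "General Electric":   (2.8, 8.5),
    "Boeing":             (4.2, 9.3),
}

for name, (q2_2014, q2_2015) in spending.items():
    ratio = q2_2015 / q2_2014
    print(f"{name}: {ratio:.2f}x year over year")
# -> 1.28x ("more than 25 percent higher"), 3.04x ("almost tripled"),
#    2.21x ("more than doubled")
```

The computed ratios match the text’s characterizations: a 28% rise for the top-53 group, roughly a tripling for General Electric, and more than a doubling for Boeing.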

Figure 3.15 Healthcare cost as a percentage of GDP for various developed countries (OECD Report, 2017).

Such costs can be attributed to a consistent increase in ‘maintenance’ drugs, as can be seen in Figure 3.16. Figure 3.17 shows that the lion’s share of profits goes to pharmaceutical companies, even though the common perception is that the biotech companies are the ones profiteering.

Figure 3.16 Price variation in diabetes treatment chemicals (OECD Report, 2017).

Figure 3.17 Aggregated revenues reported by pharmaceutical and biotechnology companies from 1991 to 2014 (OECD Report, 2017).

Note that over the last decade, and especially since 2010, almost all of the growth in the drug business has come from the biotech companies, with pharmaceutical companies reporting flat revenues between 2010 and 2014. In 2014, biotech companies accounted for 30.63% of total revenues at drug companies, up from 19.23% in 2010.

Historically, public health matters became a state affair only during the modern age. This development is synonymous with the emergence of the pharmaceutical industry as the only tool of healthcare. The same period has seen the medical profession turn from ‘treating ailments’ to prescribing medicine as the sole method of healthcare. This fed the debate between universal healthcare and private healthcare, while the possibility that there is a problem with the pharmaceutical industry itself was dissipated. As early as the 1960s, the U.S. federal government became heavily involved in ‘modernizing’ the healthcare system. President Lyndon Johnson, with his “Great Society” movement, established public health insurance for both senior citizens and the underprivileged, and Medicare and Medicaid were born. The original expressed intent was to grant certain groups of Americans access to adequate healthcare services. Ever since, the liberal left has lobbied for more government spending with a view to socializing medicine, so that universal access to healthcare can be assured. On the other hand, the conservative right has argued that universal healthcare would erode the quality of healthcare, implicitly assuming that society simply does not have the means to ensure quality healthcare for every citizen. Missing from this debate is the notion that the pharmaceutical industry has engaged in an exorbitant amount of profiteering, much of it through government-sponsored programs, such as mandatory vaccination through the public schooling program. Whenever any concern was raised about the effects of such vaccines, there was no shortage of ‘scientific’ evidence crushing any notion that there could be a correlation between various diseases and the vaccine program (Islam et al., 2015). Table 3.3 shows the occurrence of autism in the USA during the period 1970 to 2017. There has been exponential growth in the occurrence, keeping pace with the increase in the number of vaccinations administered to children (mostly through public funding). During the same time, the profits of both the insurance companies and the pharmaceutical industry have skyrocketed.

Table 3.3 Occurrence of Autism in the USA, expressed as 1 child in N (data from CDC, 2017).

Year  | 1970   | 1975  | 1985  | 1995 | 2000 | 2004 | 2006 | 2008 | 2012 | 2017
1 in  | 10,000 | 5,000 | 2,500 | 500  | 150  | 125  | 110  | 88   | 68   | 36
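The growth pattern in Table 3.3 becomes explicit when the figures are converted to cases per 10,000 children. The sketch below assumes, consistent with the CDC prevalence series, that each table entry denotes one diagnosed child in N:

```python
# Autism occurrence from Table 3.3, read as "1 child in N" (an assumption
# consistent with the CDC prevalence series). Converted to cases per 10,000.
one_in_n = {1970: 10_000, 1975: 5_000, 1985: 2_500, 1995: 500, 2000: 150,
            2004: 125, 2006: 110, 2008: 88, 2012: 68, 2017: 36}

for year, n in one_in_n.items():
    print(f"{year}: {10_000 / n:6.1f} cases per 10,000 children")
# 1970 -> 1.0 per 10,000; 2017 -> 277.8 per 10,000
```

On this reading, reported occurrence rose by a factor of nearly 280 between 1970 and 2017, which is the exponential growth referred to in the text.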

Curiously, during the Bush 43 era, the U.S. federal government took up an initiative to provide patients with an explicit list of rights akin to a ‘bill of rights’. The initiative was meant to standardize the quality of healthcare, albeit only for people who could afford medical insurance. Many interest groups from both left and right opposed the initiative, the most vocal opposition coming from the American Medical Association (AMA) and Big Pharma. As a result of this intense opposition, the initiative eventually failed to pass Congress in 2002. Figure 3.18 shows the percentage of children (aged 0 to 17) who are living in relative poverty.

Figure 3.18 Percentage of children (aged 0 to 17) who are living in relative poverty, defined as living in a household in which disposable income, when adjusted for family size and composition, is less than 50% of the national median income. https://www.theguardian.com/usnews/2017/oct/19/big-pharma-money-lobbying-us-opioid-crisis.

3.3 The Status of the Money God

Mark Twain said, “The lack of money is the root of all evil.” This has become the first premise of today’s religions. Ever since the yoke and diktat of the Roman Catholic Church was put in place, ‘religion’ has been formally conflated with the Money god, which remains to this day the undisputed almighty. Ramified with the moral compass of the Aquinian Bible and its Church-approved Aristotelian conception of the natural order, the Eurocentric view makes room for both the notion of truth as a spectrum and the notion of knowledge as an amalgam of truths and falsehoods. These are not ordinary blends of truth and falsehood. Rather, this is falsehood with a purpose. That purpose is to maximize profit and other short-term gains for a tiny minority. This would manifest throughout history, albeit under the disguise of secularism at later stages. As we will see in subsequent chapters, this addiction to short-term gains is inherent to

modern society in the name of Utilitarianism. In this process, there is no separation of ‘church’ and ‘state’ when it comes to policy and money-laundering schemes. Every policy, political or ‘religious’, is rooted in the dogma: “There is no god but Money, Maximum is the Profit.” It is no coincidence that the Roman Catholic Church has been dubbed the world capital of money laundering. Manhattan (1983) made the point with the following comment:

“The Vatican has large investments with the Rothschilds of Britain, France and America, with the Hambros Bank, with the Credit Suisse in London and Zurich. In the United States it has large investments with the Morgan Bank, the Chase-Manhattan Bank, the First National Bank of New York, the Bankers Trust Company, and others.”

The Vatican has billions of shares in the most powerful international corporations such as Gulf Oil, Shell, General Motors, Bethlehem Steel, General Electric, International Business Machines, T.W.A., etc. Some idea of the real estate and other forms of wealth controlled by the Catholic Church may be gathered from the remark of a member of the New York Catholic Conference, namely ‘that his church probably ranks second only to the United States Government in total annual purchase.’ The scheme of calling Saul of Tarsus ‘the best apostle’, making December 25 the birthday of the ‘son of God’, ‘Good Friday’ the death anniversary, and ‘Easter’ the rebirth day, along with developing the notion of original sin, all created money for the Roman Catholic Church and anyone that followed that model. Islam et al. (2015) demystified much of this process. Overall, there is a direct correlation between profit margin and every technology that has been touted as a panacea to the modern problems encountered in our lifestyle. This has become a bipartisan issue, with both liberals and conservatives displaying their hypocrisies. Liberals do not admit that they worship the Money god (with the trinity of Money, Sex, and Control); thus they are hypocritical.
Conservatives, on the other hand, worship the same Money god but proclaim that they worship the real God; thus they too are hypocritical. The recent implosion of ‘values’ through countless sexual scandals has made it painfully clear that the addiction to the Money god (with its evil trinity of Money, Sex, and Control) is alive and well among both liberals and conservatives. The following names have surfaced connecting celebrities with sex scandals in the #MeToo era:

Morgan Spurlock
Tavis Smiley
NFL Network analysts Marshall Faulk, Ike Taylor and Heath Evans
Ryan Lizza
Mario Batali
James Levine
Matt Lauer
Charlie Rose
Glenn Thrush
Russell Simmons
Jeffrey Tambor
Sen. Al Franken
Matt Zimmerman
Andrew Kreisberg
Roy Moore
Louis C.K.
Steven Seagal
Ed Westwick
Brett Ratner
Dustin Hoffman
Jeremy Piven
Michael Oreskes
Kevin Spacey
Mark Halperin
George H.W. Bush
Terry Richardson
Leon Wieseltier
James Toback
John Besh
Bob Weinstein (five days after blasting his brother, Harvey Weinstein, as a “very sick man” and a “world class liar,” Bob Weinstein was accused of making repeated romantic advances to a showrunner and refusing to take no for an answer; it was first reported by Variety)
Oliver Stone
Roy Price
Ben Affleck
Harvey Weinstein

Even the academic world has been shaken as it comes to light that many university professors have been “infected” by the “Weinstein bug” (O’Reilly, 2017). Of course, by now we also know that the bipartisan addiction to the Sex god is ubiquitous across genders. Numerous female teachers have been arrested for criminal sex offenses, and the list is growing as psychologists try to grapple with the fact that addiction to sex was supposed to be a ‘male thing’ (website 1, website 2). In the meantime, no one is spared, including judges (website 3).

3.4 The Current Economic Models

After the financial collapse of 2008, it has become clear that there are inherent problems with the current economic world order. Modern civilization has not seen many economic models. In fact, the work of Karl Marx was the only notable economic model that offered an alternative to the only other model available to the Europe-centric culture that dominated the modern age. After the collapse of the Soviet Union, the world was left with only one economic model. Therefore, when the financial collapse of 2008 took place, it created panic, since it became clear that the current economic model was not suitable for the Information Age. The same problem occurred in all aspects of modern civilization, ranging from engineering to medical science. The 2008 financial crisis is known as the worst financial crisis since the Great Depression (Beale, 2009), described as a “once-in-a-century credit tsunami” by former Chairman of the US Federal Reserve, Alan Greenspan (2008). Nobel laureate economist Paul Krugman attributed the collapse to the simultaneous growth of residential and commercial real-estate pricing bubbles: bubbles developed in both markets, even though the crisis manifested only in the residential market (Krugman, 2010). Notwithstanding the post-mortem conclusion that the main reasons behind the financial crisis were an unexpected rise in interest rates, the bursting of the 2005–06 housing bubble, and subsequent large-scale defaulting on mortgage payments (thus intertwined with the sub-prime mortgage crisis of 2007),6 Krugman pointed out the helplessness of conventional economic analysis tools in an economics of bubbles that arbitrarily turns intangibles (e.g., perception) into tangible assets (e.g., market share), as originally outlined by Zatzman and Islam (2007).
Terms such as ‘crisis of confidence’ (Sterngold, 2010) were invented to act as fudge factors in desperate efforts to explain away phenomena that could not otherwise be explained in conventional economic terms. For instance, in one week of September 2008, withdrawals from money markets reached $144.5 billion,

versus $7.1 billion the week prior. There was no real crisis – no shortage of food, no natural disaster, no war, nothing of that sort – yet there was a more than twenty-fold increase in the demand for cash. This tremendous gap in the economic metric was compensated, at least in theory, by an intangible version of the promissory note known as bank deposit insurance, via a temporary guarantee from the government, as well as by Federal Reserve programs to purchase commercial paper. Notwithstanding the fact that, despite these measures, the TED spread spiked and remained extremely volatile, reaching a record value of 4.65% in October 2008 (Bloomberg.com, 2010), it remains completely beyond all formal accounting how such a transition from intangibles to tangibles takes place (Gullapalli, 2008). In the absence of such a system of accounting, economic tools remain entirely useless in predicting the future. It is no surprise that the fear of global economic collapse persists today, with pundits predicting both doomsday and boom with equal amounts of dogmatic certainty (Cho and Appelbaum, 2008; Taibbi, 2010; Langley, 2015). The 2008 financial collapse was an excellent model of the implosive nature of the current economic infrastructure for many reasons. The 2008 crisis affected the entire world in a very short time. By September 15, 2008, Lehman Brothers had declared bankruptcy, and in less than a month approximately £90 billion was lost on the London Stock Exchange. Once again, without a single tangible event, an economic outcome of disastrous proportions was experienced – an observation that is neither quantifiable nor predictable with existing economic models. Equally unexplainable is the role of government intervention, except for the case of the lowering of the interest rate by the Bank of England. It becomes convoluted as taxpayers’ money is injected into the system to prop up financial institutions (Kingsley, 2012).
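The scale of that panic can be checked directly from the cited figures; a minimal arithmetic sketch (figures in $ billions, exactly as quoted above):

```python
# Weekly money-market withdrawals cited in the text, in $ billions.
prior_week = 7.1      # the week before the panic
panic_week = 144.5    # one week in September 2008
ratio = panic_week / prior_week
print(f"Withdrawals rose {ratio:.1f}-fold week over week")  # ~ 20.4-fold
```

The ratio of roughly 20.4 confirms the "more than twenty-fold" characterization, with no tangible shock in the real economy to account for it.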
Numerous reports have been published on this cataclysmic financial event, with wildly diverse opinions. The general theme is that such an event is natural and serves the purpose of resetting the financial picture so the same path of capitalism can then continue. For instance, Pildes (1996) suggested that a negative event such as this had the potential to deplete stocks of social capital. Such a notion is often backed with empirical evidence from available surveys. For instance, British Household Panel Survey (BHPS) data (Giordano et al., 2012) revealed that 45% of individuals reported changes in generalised trust levels over a seven-year period (2000–07). Other data show similar short-term fluctuations (Glanville et al., 2013; van Ingen and Bekkers, 2015). In essence, such a narration assumes that the overall economic outcome is independent of local events. This is in sharp contradiction to the other line of thinking, premised on the notion that generalised trust is determined in early life and is resistant to change, irrespective of later-life experiences (Putnam, 2000; Uslaner, 2002). Instead, such fluctuations reflect the economic viewpoint that trust is a summary measure of individual experiences, good and bad (Glaeser et al., 2000). All in all, the primary occupation of economics appears to be defining the transition from tangibles (wealth) to intangibles (perception) that drive the tangibles in a ‘yin–yang’ fashion. Perhaps the clearest indictment of conventional economic analysis came from Krugman (2010), who outlined the problems with modern economic models. These problems are eerily similar to what Islam et al. (2010) outlined in the context of technology development, particularly in relation to energy management. Here, we outline the problems identified

by Krugman.

3.4.1 Conflation Between External Beauty and Truth

In post-Roman Catholic Church Europe, beauty became synonymous with symmetry, homogeneity, harmony, and isotropy. Curiously, Europe-centered mathematics relied heavily on these features. Unfortunately, such features are absent in nature. Krugman pointed out how impressive-looking mathematics – impressive because it is elegant only in its Barbie-doll-like appearance – is ugly in its departure from the natural state (as measured by its artificiality). The artificiality begins with the false first premise, after which there is no turning back, the focus being the very external features of the society. In economic terms, Capitalism has been taken as a perfect or nearly perfect system.7 With such a false start, no matter what happened, economists found a way to explain away the results, albeit with a distorted science. For instance, when the Great Depression took place, economists held their breath in the face of mass unemployment, but soon went back to the narration of the same first premise – that Capitalism is perfect – this time emboldened with new equations all fitted with fudge factors (functions of government intervention, politics, and politicking) that would show events like the Great Depression as an accumulation of hiccups in an ‘otherwise perfect’ economy. No one questioned the first premise of these economic models. In Krugman’s words: “But while sabbaticals at the Hoover Institution and job opportunities on Wall Street are nothing to sneeze at, the central cause of the profession’s failure was the desire for an all-encompassing, intellectually elegant approach that also gave economists a chance to show off their mathematical prowess” – instead of introducing mathematics that would capture the big picture and give a chance to examine the underlying premises.
Today we would, and could, do this with a great deal more data at our [collective] disposal – and, especially, more ways of collating everybody’s data from all kinds of sources – than people had 1,500 years ago, when, for instance, we glimpse the achievements of ‘ancient Indian mathematics’ (Islam et al., 2013). Only recently did we discover that Islamic scholars were, some 1,000 years ago, doing mathematics of the same order that we think we discovered in the 1970s (Lu and Steinhardt, 2007) – with the difference that our mathematics can only track symmetry, something that does not exist in nature. Recently, a three-dimensional scan of a relic known as the ‘Antikythera Mechanism’ demonstrated that it was actually a universal navigational computing device – with the difference that our current-day versions rely on GPS, tracked and maintained by satellite (Freeth et al., 2006). In summary, we have no one to blame for this insanity but our own myopic view, which prevented us from seeing anything beyond whatever premise would give us the fastest desired solution. Truth is not a concern for economists.

3.4.2 Misplaced ‘Faith’ in Aphenomenal Theories

The history of modern Europe is inundated with theories that have aphenomenal roots, meaning that they rest on false fundamental premises. Krugman points out such problems in economic models.

The general problem is the reluctance to call them out, and the avoidance of the elephant in the room: Krugman himself repeatedly asserted that whatever falsehood is associated with the ‘prophecy’ of Lord Keynes was due to a misinterpretation of Keynes’ view – a typical argument made by Marx rationalists, who continue to assert that Marx was no Marxist. The elephant in the room that no one wants to talk about is the hollowness of each and every economic theoretician, whose myopia is rarely called out (Zatzman and Islam, 2007). Islam (2017) went further and disclosed the false premises behind each social science theory, the most notable being the theories that deal with the purpose of life. This discussion is important because all economic theories emerge from this aphenomenal root. The original understanding of the purpose of human life was perverted by the Roman Catholic Church into ‘salvation through Jesus’ – a dogma the ‘Enlightenment’ era replaced with notions of inalienable natural rights and the potentialities of reason, while universal ideals of love and compassion gave way to civic notions of freedom, equality, and citizenship. In all of these, the definitions of ‘natural’ and ‘universal’ remained arbitrary, devoid of any reasoning or logical thought. That made these notions of ‘freedom, equality, and citizenship’ more dogmatic than their predecessors. This has been an era in which all values spiral downward through successive degradation into ever more illogical dogmas and false premises:

1. Classical liberalism: humans are beings with inalienable natural rights (including the right to retain the wealth generated by one’s own work), and means are sought to balance rights across society. Broadly speaking, it considers individual liberty the most important goal, because only through ensured liberty are the other inherent rights protected.

2. Kantianism: all actions are performed in accordance with some underlying maxim or principle, and for actions to be ethical, they must adhere to the categorical imperative. Kant denied that the consequences of an act contribute in any way to the moral worth of that act, his reasoning being that the physical world is outside one’s full control and thus one cannot be held accountable for the events that occur in it.

3. Utilitarianism: “Nature” has placed mankind under the governance of two sovereign masters, ‘pain’ and ‘pleasure’; from that moral insight is derived the Rule of Utility: “the good is whatever brings the greatest happiness to the greatest number of people”.

4. Nihilism: life is without objective meaning. A natural result of the idea is that God is ‘dead’, and that this is something to overcome – fighting the God that is now ‘dead’.

5. Pragmatism: truth is whatever works to achieve given aims, and “only in struggling with the environment” do data, and derived theories, have meaning; consequences, like utility and practicality, are also components of truth. The purpose of life is discoverable only via experience.

6. Theism: God created the universe, and God and humans find their meaning and purpose for life in God’s purpose in creating.

7. Existentialism: each man and each woman creates the essence (meaning and purpose) of his or her life; life is not determined by a supernatural god or an earthly authority; one is free.

8. Absurdism: the Absurd arises out of the fundamental disharmony between the individual’s search for meaning and the apparent meaninglessness of the universe. As beings looking for meaning in a meaningless world, humans have three ways of resolving the dilemma: 1) suicide; 2) “religious” belief; and 3) acceptance of the Absurd.

9. Secular humanism: the human species came to be by reproducing successive generations in a progression of unguided evolution, as an integral expression of self-existing nature. People determine human purpose without supernatural influence; it is the human personality (in the general sense) that is the purpose of a human being’s life.

10. Logical positivism: the question “what is the meaning of life?” is itself meaningless.

11. Postmodernism: meaning is sought by looking at the underlying structures that create or impose meaning, rather than at the epiphenomenal appearances of the world.

12. Naturalistic pantheism: the meaning of life is to care for and look after nature and the environment.

The ongoing war within modern European (including North American) social science rages between the majority of current scholars, who consider humans as engaged purely mechanically in social, economic, or political life, in which conscience and consciousness play no determining role, and a largely diffused minority who challenge this assumption from a number of directions. The bellwether of what would eventually emerge in the natural sciences came in the earliest attempts to establish some of the new social sciences on a more rigorous basis, complete with their own “laws of motion” à la Newton in physics.
The industrial revolution had already been underway for a generation in Britain when the political economist Adam Smith famously put forward his theory of the so-called “invisible hand”:

“… every individual necessarily labours to render the annual revenue of the society as great as he can. He generally, indeed, neither intends to promote the public interest, nor knows how much he is promoting it. By preferring the support of domestic to that of foreign industry, he intends only his own security; and by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention. Nor is it always the worse for the society that it was no part of it. By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it. I have never known much good done by those who affected to trade for the public good.” (Smith, 1776)

Implicit in Smith’s invocation of the superiority of the individual pursuing his self-interest over the interests of society or the public lies a notion of the shortest conceivable time-span, one in which ∆t → 0: “he intends only his own gain”. Herein lurks the aphenomenal heart of Adam Smith’s darkness: if what happens to “self-interest” is transplanted to a context in which ∆t →

∞ is considered, this aphenomenality becomes starkly evident. Self-interest in the long term becomes the pursuit of gain or benefit for society as a whole. Otherwise, it would be akin to dividing by zero, something that would cause the model to “blow up” (Zatzman and Islam, 2007). Significantly, Smith does not say that money-capital wedded to the short-term immediate intentions of some individuals (or grouping of common interests) would not achieve its aims. He confines himself instead to observing that objectives which formed no part of the originating set of immediate short-term intentions, viz., “an end which was no part of his intention,” might also come to be realized, thanks to the intervention of the “invisible hand”. All the defenders of, and apologists for, the status quo have pointed to Adam Smith’s argument as their theoretical justification for opposing, in principle, any state intervention in the economy. Chanting their mantra of the invisible hand, policy-makers on the same wavelength have been confining and restricting such intervention to those parts of economic space in which no profitable production of goods or services operates with which such intervention would be competing. How sound is this chain of reasoning, however? Smith aspired in this manner to render economics as scientific as physics. Underlying Smith’s view was a set of philosophical assumptions, independent of his economic research, which formed a definitive perspective regarding the place of scientific reasoning of any kind within human thought in general – a perspective actually very close to the outlook of Sir Isaac Newton. This was the broad 18th-century Deist philosophical outlook, already prevalent among a wide section of the European intelligentsia of Smith’s day: anything could be examined as the outcome of a process comprising observable, definable stages and steps, linked ultimately to some Prime Mover (or initiating force).
During the 17th and 18th centuries, for most scientists, an analysis ascribing a process to some Prime Mover manifesting itself as a Newtonian “mechanism” was the best of all possible worlds. On the one hand, a natural occurrence could be accounted for on its own terms, without having to invoke any mystical forces, divine interventions, or anything else not actually observed or observable. On the other hand, the divinity of Creation did not need to be dispensed with or challenged. On the contrary, this divinity was being reaffirmed, albeit indirectly, “at a certain remove”, insofar as whatever was required to sustain or reproduce the process in question could now be attributed to some even more fundamental “law of motion”. In any event, such “laws of motion” had the more fundamental properties of being indispensable and necessary; without them, no investigation could be carried very far or penetrate nature’s secrets. Re-examined in this light, the impact of Smith’s assertions about the “invisible hand” among his contemporaries can be better understood. In essence, he was declaring that:

a. Economic life is comprised of phenomena that can be analyzed and comprehended as scientifically and as objectively as Newton had analyzed and disclosed the laws of physical motion of all forms of matter in Nature and even the universe; and

b. Such investigations would provide yet another proof of the divinity of Man’s existence within that natural universe.

If Adam Smith was the mastermind of reducing humanity to a selfish, egocentric individual, John Maynard Lord Keynes would be the mastermind of minimizing the time frame over which

profit has to be maximized.8 In the words of John Maynard Lord Keynes, who believed that historical time had nothing to do with establishing the truth or falsehood of economic doctrine, “In the long run, we are all dead” (cited by Zatzman and Islam, 2007). With that assertion, Keynes further fantasized about an ‘eternal equilibrium’, a duration that in real terms means ∆t → 0.

It was not Keynes’s original falsehood. He had borrowed it from Malthus. Reverend Thomas Robert Malthus, a British scholar, advanced the theory of rent. In his publications from 1798 through 1826, he identified various factors that would affect the human population. For him, population is controlled by disease or famine. His predecessors believed that human civilization could be improved without limitations. Malthus, on the other hand, thought that the power of population growth “is indefinitely greater than the power in the earth to produce subsistence for man”. Malthus’s theories were later rejected on the basis of empirical observations confirming that famine and natural disasters are not the primary means of population control. Many economists, including Nobel Laureate economist Amartya Sen (1998), confirmed that man-made factors played a greater role in controlling human population. However, Malthus’s theory lives on in every aspect of European social science and hard science. His most notable follower was Charles Darwin, whose theory of natural selection is eerily similar to Malthusian theory. They are similar in two ways: 1) they both assume that humans are just another species, thus disconnecting human conscience from the human being; 2) they both use natural causes as the sole factor deciding human population, thus implying that they know the underlying program of nature. Of more significance is the fact that Darwin was considered to be a ‘hard scientist’ whereas Malthus was considered to be a social scientist and economist (Islam et al., 2017).
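The two time horizons at issue here can be stated compactly. The sketch below is our own minimal formalization, not an equation from the source; B(∆t) is an assumed notation for the net benefit of a decision evaluated over a horizon ∆t:

```latex
% Hedged formalization (our notation): B(\Delta t) denotes the net benefit
% of a decision evaluated over the horizon \Delta t.
\underbrace{\lim_{\Delta t \to 0^{+}} B(\Delta t) > 0}_{\text{Keynesian (short-term) test}}
\qquad \text{versus} \qquad
\underbrace{\lim_{\Delta t \to \infty} B(\Delta t) > 0}_{\text{sustainability (long-term) test}}
```

On the long-term test, a decision that profits the individual at society’s expense fails, which is the sense in which self-interest over ∆t → ∞ collapses into the interest of society as a whole.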
Darwin said, without evidence, that the emergence of a species distinct in definite ways from its immediate predecessor and new to the surrounding natural environment generally marked the final change in the sequence of steps in an evolutionary process. The essence of his argument concerned the nonlinearity of the final step, the leap from what was formerly one species to distinctly another species. Darwin was silent on the length of time that may have passed between the last observed change in a species-line and the point in time at which its immediate predecessor emerged. Yet this interval, the characteristic time of the predecessor species, was the time period in which all the changes so significant for later on were prepared. It could be eons, spanning perhaps several geological eras. This idea of a natural, characteristic time is missing from every European theorist.

Even though it is not explicitly recognized, for obvious reasons, Karl Marx also derived his inspiration from Malthus and did little to change the premise on which Malthus built his theories. For Marx, however, human beings did have a connection to conscience, but that conscience was solely dedicated to “survival”. This ‘conscience’ is not any different from what has been known as ‘instinct’, something that every animal has. This survival, in Marx’s belief, was the reason for dialectical materialism and the class struggle. Similar to what has been pointed out in the discourse on human civilization (Islam et al., 2013), the only possibility that Marx did not investigate is the existence of a higher conscience that makes a human unselfish and considerate of long-term consequences. Such a deviation from a long-term approach is strictly Eurocentric. Such addiction to a short-term approach was nonexistent prior to Thomas Aquinas and the

adoption of doctrinal philosophy. What made Marx popular was his empathy for the poor and the oppressed. His notion of capitalism as the “dictatorship of the bourgeoisie”, a notion which is itself based on the same premise that ‘the human being is an evil animal’, struck a sympathetic chord with a wide range of followers. Similar to what Malthus predicted in terms of population control by famine and natural disasters, Marx predicted that capitalism would be subject to internal conflicts and would implode, being replaced with socialism. This in turn would lead to the replacement of the “dictatorship of the bourgeoisie” with the “dictatorship of the proletariat”. His theory was so convincing that the Soviet Union was formed in 1922, leading the way to many countries formally adopting Marxism as their political system. In 1949, the People’s Republic of China became communist, which meant that nearly half of the world population was immersed in a political system that can best be described as the dream application of Marx’s political theory. Marx is recognized as one of the most influential persons of all time (Hart, 2000).

Yet, the prediction of Marx that capitalism would be replaced with socialism and eventually give rise to a stateless, classless society has utterly failed. Instead of a stateless society ruled by “workers”, socialism created the biggest and most repressive government regimes in human history. Many are quick to note that every promise capitalism made in terms of a free-market economy has been broken and that monopoly has become the modus operandi of the biggest corporations of such a ‘free-market’ economy, but few point out the demise of Marxist predictions in societies that did everything to uphold Marx’s ideals. The aphenomenal model started by Malthus, then sanctified by Adam Smith and ‘certified’ by Lord Keynes, was not called out by anyone, not even the likes of Karl Marx. The same myopic model continues to be used by all economists.
For instance, Nobel laureate economist Joseph Stiglitz set out to attribute the limited success of government intervention in the economy, intervention meant to fulfill the Keynesians’ fantasy of eternal equilibrium, to imperfect information. To get there, however, he first has to re-explain the failure of Adam Smith’s “invisible hand” in terms of the new paradigm of the economics of information. Thus, he wrote:

Perhaps the most important single idea in economics is that competitive economies lead, as if by an invisible hand, to a Pareto efficient allocation of resources, and that every Pareto efficient resource allocation can be achieved through a competitive mechanism, provided only that the appropriate lump sum redistributions are undertaken. It is these fundamental theorems of welfare economics which provide both the rationale for the reliance on free markets, and the belief that issues of distribution can be separated from issues of efficiency, allowing the economist the freedom to push for reforms which increase efficiency, regardless of their seeming impact on distribution; if society does not like the distributional consequences, it should simply redistribute income. The economics of information showed that neither of these results was, in general, true. To be sure, economists over the preceding three decades had identified important market failures — such as the externalities associated with pollution — which required government intervention. But the scope for market failures was limited, and thus the arenas in which government intervention was required were limited. (Stiglitz ibid.: 503)

However, the linkage of competition, efficiency and distribution is once again the weak link of the entire argument. Confusion of value with its magnitude as represented by price disguises

the fact that its source lies in the application of living labor to raw material – using equipment, and in conditions, supplied by a party prepared to engage the laborer’s service for wages. This disguise then effects the separation of the magnitude of the value as represented by price from its source. Hence, as a theoretical proposition, it cannot possibly be competition – again, one of the conditions attending production and sale of labor-power and one not under the control of the laborers – which is responsible either for allocating resources, efficiently or otherwise, such that resources are rendered scarce for some and sufficient for others.

Since Thomas Aquinas’ day in Europe, all the way through the Information Age, the following path has been identified: Aphenomenal first premise → Science (only a claim) → Philosophy (positivism, behaviorism, humanism, mechanical materialism) → Tangible science (of the technology development scheme, Newton and Lord Kelvin type) → All branches of modern education (Economics of the tangible (game theory of Bernoulli, etc.); Psychology of Sigmund Freud; Energy policy of Lord Keynes; Policy making of King James II or Papal authority; Theology; Fundamentalism; etc.)

3.4.3 The Antithesis of the Prophecy of Doom

Krugman calls this undue optimism ‘Panglossian finance’. As stated earlier, such undue optimism is necessary for justifying any event with a Pragmatic twist. Because the outlook of interest is extremely short term and the entire theory relies on ‘verification’ over a short span, these prophets of the boon always escape untarnished. Such a sentiment is ubiquitous; pundits always muster optimism for their favourite conclusions, and they do so in the most illogical manner. On September 24, 2017, Fox Sports commentator Hegseth screamed, “This [USA] is the least sexist, least racist, most free, most equal, most prosperous country in the history of humankind.” People called him a ‘bat crazy conservative’. Then there is Obama, who said, “if you had to choose any moment in history in which to be born, you would choose right now. The world has never been healthier, or wealthier, or better educated or in many ways more tolerant or less violent” (Ford, 2017). A similar tune is played by other celebrities. For instance, NFL Hall of Famer Mike Ditka said (Manzullo, 2017), “People rise to the top and have become very influential people in our country by doing the right things. I don’t think burning the flag, I don’t think protesting the country … It’s not about the country…. They are protesting maybe an individual, and that’s wrong, too. You have a ballot box, you have an election. That’s where you protest. You elect the person you want to be in office. And if you don’t get that person in office, I think you respect the other one. That’s all. Period.” Of course, Krugman himself belongs to one group, so it is convenient for him to criticize the other group while he makes similarly illogical claims when it comes to defending his own conclusions. It is no surprise that Krugman fails to see the driver of this hope-and-fear yo-yo.
Figure 3.19 shows how modern financial institutions deliberately perpetrate fear and greed (often using the euphemism of ‘optimism’). It turns out that they both sell. As we will see in later sections, both fear and optimism are part of the modus operandi that is intended to create

artificiality in the economic process.

Figure 3.19 Both optimism and fear lead to movements in the financial market, thereby stimulating an economy that is stacked against sustainability.

Instead of identifying the above, Krugman cites undue optimism as the source of economic fallacy. In his words,

By 1970 or so, however, the study of financial markets seemed to have been taken over by Voltaire’s Dr. Pangloss, who insisted that we live in the best of all possible worlds. Discussion of investor irrationality, of bubbles, of destructive speculation had virtually disappeared from academic discourse. The field was dominated by the “efficient-market hypothesis,” promulgated by Eugene Fama of the University of Chicago, which claims that financial markets price assets precisely at their intrinsic worth given all publicly available information. (The price of a company’s stock, for example, always accurately reflects the company’s value given the information available on the company’s earnings, its business prospects and so on.) And by the 1980s, finance economists, notably Michael Jensen of the Harvard Business School, were arguing that because financial markets always get prices right, the best thing corporate chieftains can do, not just for themselves but for the sake of the economy, is to maximize their stock prices. In other words, finance economists believed that we should put the capital development of the nation in the hands of what Keynes had called a “casino.” (Krugman, 2013).

What Krugman did not see or did not want to see is the fact that European history is a roller coaster that took the general public on a ride, fluctuating between optimism and pessimism. “Fear sells wars and wars fuel economic prosperity” has been the driving mantra of modern economics. Once again, this is deeply entrenched in European history. Christianity paved the road for the concept of a flawed human nature to come to light. Of course, even the ancient Greeks (e.g., of
Aristotle’s period) and the Chinese struggled with the nature of humans. However, Hobbes’ influential philosophy, based on pragmatism and absolute power (both dangerously skewed concepts), was developed following such views on human nature. Sure enough, Hobbes, as Sorrel (1996, 219–222) elaborates, believed that people are naturally prone to competition, including violent competition, fighting out of fear, and seeking out reputation. The Leviathan, after eloquently describing these qualities of men, summarizes: “So that in the nature of man, we find three principal causes of quarrel. First, competition; secondly, diffidence; thirdly, glory,” and that people therefore use violence “to make themselves masters of other men’s persons, wives, children, and cattle; the second, to defend them; the third, for trifles” (Leviathan, 77). The logic is that because some have these qualities at times, people are in a constant state of war, and that the result is “continual fear, and danger of violent death; and the

life of man, solitary, poor, nasty, brutish, and short.” (Ibid, 78), as long as there is no authority or “sovereign” to govern them (Sorrel, 222; Williams 2003). Therefore, there is a need for people to get together and create an irrevocable covenant to create a sovereign (person or office) that takes all the rights of the people with the exception of self-preservation, in order to establish peace and order (Sorrel, 222; Williams 2003). Herein lies the fundamental premise of Hobbes that pertains to how to derive a ‘standard.’ Hobbes assumed that it is possible to rightfully create a sovereign that is the source of all laws and power. By believing in this, he invoked the possibility of having a process that can select a sovereign based on consensus. By ceding power to the sovereign then, the general public is no longer a party to the process of creating and enforcing laws. This means that the standard or grounding starts from the self (internal standard) to create an arbitrary external standard, that being the sovereign.

3.5 The Illogicality of Current Theories

The most astounding contradictions are rooted in the very driver of the modern epoch, viz., Economics. Economics – the original meaning being ‘to economize’ in the sense of arranging discharge of the responsibilities of domestic household management to a level of maximum efficiency – has been turned into an avenue for wasteful schemes by banks and industrial conglomerates, prompting former World Bank chief economist and Nobel laureate Joseph Stiglitz to castigate the International Monetary Fund (IMF) for offering ‘remedies’ that make things worse, turning slowdowns into recessions and recessions into depressions. This institutionalization of preposterous schemes is the hallmark of the modern age. It involves turning the real into the artificial while increasing the profit margin (Figure 3.20).

Figure 3.20 Modern science and technology development schemes focus on turning the natural into the artificial and assigning artificial values proportional to the aphenomenality of a product.

As per the Lewis model of a dual economy (Lewis, 1954), much of the low-wage sector has little influence over public policy. This is true for the USA as a nation and for the world as a group of

nations, among whom the ‘poorer’ nations have no control over their future. The second feature of the Lewis model (Figure 3.21) is that the high-income sector will keep wages down in the other sector to provide cheap labor for its businesses. This is true in the USA, where the entire debate centers on the minimum wage whereas the salaries of CEOs are not in dispute, even during the worst of financial crises (Gabaix and Landier, 2008). This is reminiscent of global politics, which holds the poorest of countries to the highest of standards in terms of environmental accountability, creating global mismatch and financial extremism (Althor et al., 2016). Similarly, social control is used to keep the low-wage sector from challenging the policies favored by the high-income sector, often manifested through mass incarceration by a system that is enriched at the expense of justice. What happens in the USA also happens on a global scale, with sanctions imposed on poorer countries that do not conform to the globalist agenda. The last feature of the Lewis model is how politicians are manipulated to help reduce taxes on the richest members. This modus operandi is reflected in the numerous double standards imposed by the World Bank, the International Monetary Fund (IMF), and others.

Figure 3.21 Lewis Dual Economy thrives on the existence of inherent disparity.

3.6 The Delinearized History of Modern Economics

The very word ‘economics’ is at the core of humanity and human society, as it deals with the science of goods and services. As such, economics is as old as human society. In the words of Aristotle, “the human good turns out to be activity in the soul [mind] in accordance with excellence.” In other words, the good life is activity that involves rationality and embodies excellence over an entire lifetime. Aristotle’s own definition of the human was “political animal.” He believed that the most essential differentia of humans was their cultural behavior—their collective life in the polis (the city). Today, as we look over sociopolitical events, it becomes clear that the discussion of the purpose of life is integral to all aspects of life, including engineering. Similar to Aristotle, the Neoplatonic philosopher Porphyry defined man as a “mortal rational animal”, and also considered animals to have a (lesser) rationality of their own. It is not clear why any philosopher would use the word ‘mortal’ to describe humans. Because every living creature is subject to death, the use of ‘mortal’ is redundant and can highlight a possible

conflation with legends of gods or immortal entities. Notwithstanding the invocation of God, both Aristotle and Porphyry advance the same notion that rationality sets humans apart from other living creatures. It was the Roman Catholic Church that injected irrationality into human cognition through the introduction of the theme of the Trinity, and later the concept of ‘original sin’ (Khan and Islam, 2016; Islam et al., 2015). We also argued that philosophers from the Enlightenment era were no different from the philosophers of the Roman Catholic Church. As an example, one can cite Descartes, who advanced his famous but categorically unscientific claim, “I think therefore I exist”. He then goes on to wonder “What am I?” He considers and rejects the scholastic concept of the “rational animal,” and he does so without any rational explanation, as he continues: “Shall I say ‘a rational animal’? No; for then I should have to inquire what an animal is, what rationality is, and in this one question would lead me down the slope to other harder ones.” The only thing that remains true is that there is a mind or consciousness doing the doubting and believing its perceptions, thus replacing the Roman Catholic Church’s ‘God’ with ‘the Self’. This was only the beginning of what we call the downward-spiralling roller coaster ride (Islam et al., 2013).

Instead of pointing out the obvious false assumption of Descartes, Freud took it as an inability of humans to look at their weaknesses. Freud simply characterized the reluctance to seek explanation as too much “stress on the weakness of the ego in relation to the id and of our rational elements in the face of the daemonic forces within us”. Freud was not religious enough to recognize the ‘daemonic force’ as part of ‘original sin’. At the same time, he was not ‘naturalist’ enough to introduce the so-called ‘original gene’ concept that has become the regurgitated version of the ‘original sin’ model (Islam et al., 2015).
In Freud’s scheme, the id is the set of uncoordinated instinctual trends; the super-ego plays the critical and moralizing role; and the ego is the organized, realistic part that mediates between the desires of the individual and the super-ego. It is also expected that the super-ego can stop one from doing certain things that one’s id may want to do. The entire process of cognition or ‘thinking’ that makes humans unique is thus disconnected from conscience or a higher purpose of being.

Neo-Kantian philosopher Ernst Cassirer, in his work An Essay on Man (1944), altered Aristotle’s definition to label man a symbolic animal. This definition has been influential in the field of philosophical anthropology, where it has been reprised by Gilbert Durand, and has been echoed in the naturalist description of man as the compulsive communicator. Once again, the inherent qualities of humans that enable them to think with a purpose (i.e., to think conscientiously) are not recognized and are in fact shunned. On the sociological side, another sinister plan was concocted. Sociologists in the tradition of Max Weber distinguish rational behavior (means-end oriented) from irrational, emotional or confused behavior, as well as from tradition-oriented behavior, but recognize the wide role of all the latter types in human life. None of them speak of the purpose of life or its role in conscientious thought. Perhaps more importantly, none of them even wondered what role rationality plays in decision making, ranging from personal life to global politics. Ethnomethodology sees rational human behavior as representing perhaps 1/10th of the human condition, dependent on the 9/10ths of background assumptions which provide the frame for means-end decision making. However, none of this had any scientific backing, nor was it free

from grossly illogical assumptions. Philosophers reached a nadir when Bertrand Russell satirized the concept that man is rational, saying “Man is a rational animal — so at least I have been told. Throughout a long life I have looked diligently for evidence in favour of this statement, but so far I have not had the good fortune to come across it.” The humor of his observation derives from an equivocation between describing mankind as rational (i.e., that all members of the species have the potential to think, whether that potential is realized or not), and describing an individual person as rational (i.e., the person actually can think well, avoid biases, make valid inferences, etc.). How is it possible that such giants of philosophical thought missed such an obvious and logical description: that a conscientious decision makes a person rational, and that the ability to cognize rationally can be obscured by a lust for short-term reward? We explained this behavior as the inability to move past a false premise, often characterized as cognitive dissonance or, more dramatically, as ‘vibrating on false premises’. These philosophers are almost intoxicated with their self-righteous hubris. In the words of Krugman, “economists were congratulating themselves over the success of their field. Those successes — or so they believed — were both theoretical and practical, leading to a golden era for the profession. On the theoretical side, they thought that they had resolved their internal disputes.”

Perhaps the most important contribution of Aristotle was that he recognized the role of tangibles in defining what can influence the intangibles. By recognizing this role, Aristotle in fact linked thoughts (rational or otherwise) to the environment, which includes the society. Thus the form of something does not exist independently; it is not an entity in itself. Rather, it is the specific pattern or structure or form of a thing which defines how it exists and functions.
Thus, for Aristotle, it makes no sense to talk of a soul or mind without a body, for the essence of a person is embedded and intertwined with their matter. We observed previously (Islam et al., 2017) that the role of environment in defining thoughts is not trivial and has eluded most of the social scientists of the modern era. To some extent, Bronfenbrenner (1979) recognized the same, but only in the context of child development. Philosophers have accorded an exception to what they viewed as Aristotle’s take on humanity by stating that divine intellectual functioning may take place without a body. This becomes a paradox if one draws a parallel with computers. For example, even if computers ‘think’ without bodies, their thought still depends on material components. The missing point here is that the thought process that makes humans unique is the connection to the ‘soul’ that is absolute and does not change with the environment. That soul is inherently connected to the true intangible, or Creator. This was a known concept until the time of the Enlightenment.

Aristotle’s major distinction between the rational component and the emotional and desire components was taken as yet another paradox and as a contradiction of Plato’s model. However, such contradictions do not arise if one considers desire-driven and conscientious thoughts as instinctive or short-term and long-term processes, respectively. Every human has the propensity to long for a short-term solution or short-cut, but long-term success depends on how many conscientious decisions a person has made. Sustainability of any scheme lies in the long-term option. With the way Aristotle connects the purpose of life to long-term success (see

Figure 3.22), it becomes clear that he endorsed the long-term approach.

Figure 3.22 Sustainability can be defined as the inevitable outcome of a conscientious start (1: phenomenal start with phenomenal intention; 2: aphenomenal start and/or aphenomenal intention). Figure redrawn from Khan and Islam (2016).

The problem with regulatory control by the government is the fact that Europe has lost its standard for moral authority. As we will see in the latter portion of this section, Ibn Khaldun (d. 1406) did have such a standard, one that is universal and carries divine authority, but when European philosophers emulated his model, they imposed arbitrary universality and put themselves in place of divine authority. Yet the only intervention a government need invoke is conformance to the universal moral standard, which in turn would dictate all policies, including economic ones. So, when a government today talks about intervention, such intervention is an oxymoron in the context of a ‘free’ economy. The good has always been characterized by the intention to serve a larger community, thereby assuring the long-term interest of the individual, while evil has been characterized as the intention to serve self-interest. This is entirely consistent with Aristotle’s take on humanity. Aristotle also subscribed to the notion that nature is perfect, and that both animate and inanimate objects function perfectly. It follows that the tangible part of humans falls under the universal order and is independent of their control. However, humans also have intellect or rationality. It is this rationality and its practical application that make humans unique. Humans are capable of drawing upon their experience and blending it with their inherent qualities, such as temperance, courage, a sense of justice and numerous other virtues. Creating a balance between various extremes is the objective of a successful person. In his words, these virtues are “the mean between the extremes.” A life of virtue is the ideal for human life.
This is entirely consistent with the time-honoured standard of public governance and rules of engagement in foreign policy outside the Western hemisphere (Islam, 2016). In contrast to the state of virtue comes the state of vice, which necessarily involves a short-term approach. Plato as well as Aristotle understood this ‘vice’ as something driven by desire, which is inherent. It is not because of a propensity to sin (similar to what is stated as ‘original sin’); it is rather because humans have an inherent weakness for taking the ‘short-cut’,

which leads to deciding on a short-term approach. This weakness has been exploited in Eurocentric culture and is reflected in the so-called ‘maximize pleasure and minimize pain in the short term’ model. Figure 3.23 shows how the balance between individual liberty and regulatory control is struck, corresponding to the two extremes of the universal standard. In the end, what we have is an optimization of two contrasting trends. If regulatory control is increased, one is not expected to have any accountability, and a test loses its meaning. On the other hand, if individual liberty is excessive, it leads to anarchy and, at the same time, accountability skyrockets, making it impractical for humans to achieve their full potential. The intersection of these two graphs represents the optimum that, in Aristotle’s words, is the ‘middle of the extremes’.
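The optimization described here can be sketched numerically. The following is an illustrative toy model only; the linear forms and numbers are our own assumptions, not taken from Figure 3.23. It locates the intersection of a falling regulatory-control curve and a rising accountability curve by bisection:

```python
# Illustrative toy model of the two trends in Figure 3.23 (hypothetical
# linear forms, not the book's actual curves). Liberty is scaled to [0, 1].

def regulatory_control(liberty: float) -> float:
    """Hypothetical falling trend: more liberty, less external control."""
    return 1.0 - liberty

def accountability(liberty: float) -> float:
    """Hypothetical rising trend: more liberty, more personal accountability."""
    return liberty

def optimum(lo: float = 0.0, hi: float = 1.0, tol: float = 1e-9) -> float:
    """Locate the intersection of the two trends by bisection."""
    f = lambda x: regulatory_control(x) - accountability(x)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0.0:
            hi = mid  # the sign change (intersection) lies in [lo, mid]
        else:
            lo = mid
    return (lo + hi) / 2.0

# For these symmetric toy curves the balanced point is liberty = 0.5;
# steeper or shallower trends would shift the optimum accordingly.
x_star = optimum()
```

The design point is simply that the optimum is not at either extreme: pushing liberty or control to its limit drives one of the two costs unboundedly high, while the intersection is the ‘middle of the extremes’.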

Figure 3.23 Good behaviors in humans lie within the optimum regime of individual liberty (redrawn from Islam et al., 2017).

At this point, it is worth noting that Aristotle defined happiness as a goal that is achieved by exercising good virtue over the course of one’s lifetime. In other words, by being able to truly make an intention of doing long-term good, one can achieve happiness in the short term as well as the long term. The connection between the purpose of life, the intention to meet that purpose, and happiness was recognized by Aristotle in no uncertain terms. He said that realizing one’s own capabilities by intellectually considering the substance of one’s happiness is the first step to achieving happiness.

These considerations provide the actual material foundation for the contemporary notion that happiness can only be sustained by increased consumption. The notion itself has become central to the Eurocentric outlook since the time Jeremy Bentham, in the increasingly industrialising world economy of the late 18th century, proclaimed his Utilitarian doctrine that the best society is that which provides “the greatest good for the greatest number”. Bentham’s philosophy was explicitly converted into economic form by William Stanley Jevons (discussed extensively in Chapter One). What Jevons elaborated – which took matters further than, and down a path not anticipated by, Bentham – was to link individual happiness generated by material consumption to the ultimate destiny of individuals as economic actors.
That linkage of happiness and destiny of entire societies through successive and endless acts of material consumption by its individual members was implicit in, and undergirded, Jevons’ most celebrated dictum that “Value depends entirely upon Utility” (Jevons 1870), i.e., that the individual seeking to satisfy some need determines the value of objects brought for sale in the market by whatever amount s/he is prepared to pay to acquire the good or service that fulfills

the need. Mired since the days of Jevons, Walras, Menger and Marshall in neo-classical notions of “consumer sovereignty” and similar related notions tying happiness to consumption as Humanity’s destiny, conventional economics has erased any other prospect or possibility for happiness not tied to a vast engine of material consumption. Human needs, examined in the large, loom as something massive – as the totality of what billions of humans need. However, what humans need as individuals or individual family units (no matter how extended) – in other words, what per capita requirement the material productive apparatus needs to supply – is a vastly different matter. As Figure 3.24 serves to illustrate, in much of the world not informed by the Eurocentric standpoint, “happiness” really stands for an end-point, a destination, rather than some all-consuming appetite to be fed continually regardless of actual need, the latter being the driver for building and maintaining the “economy” as an engine for producing ever-mounting piles of waste (Zatzman and Islam, 2007).

Figure 3.24 Origins of the Arabic word for "happiness" – a non-Eurocentric view.

It then follows that a person faces numerous challenges and has to strike a balance between the passions of long-term thoughts and the temptations of short-term desires. Long-term decisions bring both short-term and long-term good to the society, but most often bring only long-term good to the person. Every long-term decision will have a short-term negative consequence. Zatzman and Islam (2007) considered that struggle between two opposing forces in terms of spending money in charity. Consider the long-term investment concept as illustrated in Figure 3.25. The outstanding feature of this figure is that endowments and charitable giving in which there is no return to the investor, not even an "incentivized" kickback in the form of a deduction on the investor's income tax liability, generate the highest rate of return for the longest investment term. In effect, the more social and less self-centered the intention of the investor, the higher the return. So, every act of charity must overcome the short-term desire to spend the money, say, on another luxury gadget or another dessert that will likely worsen an existing habit of overeating. When that temptation is overcome, both the society and the long-term interests of the person investing will benefit greatly.

Figure 3.25 Maximizing the rate of return on investments for others – this figure illustrates one prospect that becomes practically possible if intangible benefits are calculated into, and as part of, a well-known conventional treatment of investment capital that was developed initially to deal purely with tangible aspects of the process and on the assumption that money would normally be invested only to generate a financial return to its investor (from Zatzman and Islam, 2007).

This line of analysis changes the paradigm of economic considerations entirely. Furthermore, as illustrated in Figure 3.26, it is inherently reasonable to anticipate improved revenue performance for the enterprise that can bank on employer-employee trust. At a personal level, it means that if we can increase our activities that are motivated by long-term gains, we would be able to increase our happiness at this moment as well as in the long term.

Figure 3.26 Sensitivity of business turnover to employer-employee trust – under a regime guided by the norms of capital-dependent conventional economics, trustworthiness counts for nothing; under an economic approach that takes intangibles into account, on the other hand, revenue growth in an enterprise should be enhanced.

Thomas Aquinas (1225–1274), 'the father of doctrinal philosophy', took the logic of Averroes and introduced it to Europe with a simple yet highly consequential modification: he would color the (only) creator as God and define the collection of Roman Catholic Church documentation on what eventuated in the neighborhood of Jerusalem over a millennium earlier as the only communication of God to mankind (hence the title, Bible – the (only) Book). Even though Thomas Aquinas is known to have adapted the logic of Averroes, his pathway as well as his prescribed origin of acquiring knowledge was diametrically opposite to the science introduced by Averroes. This is because the intrinsic features of both God and the Bible were dissimilar to the (only) creator and the Qur'an, respectively (Armstrong, 1994). For old Europe and the rest of the world that it would eventually dominate, this act of Thomas Aquinas indeed became the driver of a wedge between two pathways, with the origin, consequent logic, and the end being starkly opposite. Note the following conclusions of Thomas Aquinas:

Any exception is possible, if deemed convenient
There is an intelligent person
Authority has access to knowledge

It is no surprise that all the philosophical theories in Europe, as outlined earlier, have the above false premises embedded in them. The sophistry added has been the introduction of Nature and natural to replace God and Divine, respectively.

Table 4.2 Various theories and the fundamental premises behind them (conclusions are not necessarily those of the scientist that posited the premise).

Ancient Greek philosophers; Ancient Indian philosophers; Ancient Chinese philosophers; Middle Eastern prophets (Abraham, Moses, Jesus, Muhammad)
Premise: All matter originates from the Creator and to the Creator all matter returns.
Conclusions: Universe ruled by a unique set of laws; purpose of creation is to have these laws executed; humans are ultimately accountable for their obedience to that unique set of laws; humans are responsible for humanization of the environment; humans are judged by their intentions*; humans are representative of the Creator*; earthly life is a test*; eternal life is the ultimate reality.

Aristotle
Premise: A 'substance' fills all the universe. Conclusion: Absolute speed relative to that 'substance' is infinity.
Premise: We see matter because something emerges from our eyes for us to be able to see.
Premise: Everything is either A or not A. Conclusion: A human (mortal) cannot be god (immortal).

Thomas Aquinas
Premise: God, Son of God, and Holy Spirit can all exist in one. Conclusion: Any exception is possible, if deemed convenient.
Premise: Everything is governed by the intention of the creator. Conclusion: There is an intelligent person.
Premise: Time is a property of matter.
Premise: All knowledge comes from the Bible. Conclusion: Authority has access to knowledge.

Averröes
Premise: All true knowledge comes from the Qur'an. Conclusions: A good first premise produces correct cognition and vice versa; every person has access to knowledge.

Ibn Haitham
Premise: Everything about the Creator is unique (only the Creator can be infinity and external to the universe); every creation is internal and connected to each other; thoughts are internal. Conclusions: No creation can have a speed of infinity; no creation can have constant speed.
Premise: Everything, including time, matter, and thought, originates from and is controlled by the Creator. Conclusion: Universal order is unique and absolute.
Premise: Humans have a unique purpose and are judged based on their intention. Conclusion: Accountability is based on intention, which has no bearing on universal order.
Premise: A source leaves a signature on whatever emerges from it. Conclusions: The light source affects the quality of light; a good source emits good light and vice versa.

Newton
Premise: There is a steady state. Conclusion: First law of motion.
Premise: There is a state of uniform velocity. Conclusion: First law of motion.
Premise: There is an external force. Conclusion: Second law of motion.
Premise: Light travels in waveform. Conclusion: Newton's wave theory.
Premise: God interferes with universal order.

Dalton
Premise: All matter is comprised of solid, spherical, rigid balls. Conclusion: Atomism is true.

Maxwell
Premise: All energy forms are comprised of solid, spherical, rigid balls. Conclusions: Energy is comprised of solid spherical balls; photons are uniform and independent of the light source.
Premise: A luminiferous, all-pervasive 'substance' exists (see Aristotle's premise above). Conclusion: The speed of light relative to this 'substance' is variable.

Lord Kelvin
Premise: Flying is an absurd concept.
Premise: The universe is evolving toward 'heat death'. Conclusion: The degree of chaos is increasing.

Einstein
Premise: Ether (an all-pervasive 'substance'; see Aristotle's premise above) exists.
Premise: God doesn't play dice. Conclusion: The time function is deterministic and exact.
Premise: Light has constant velocity. Conclusion: Light is uniform and static.
Premise: Time is a perception of individuals. Conclusion: Humans control time.

Feynman
Premise: Every matter has numerous historical paths. Conclusions: Creation from nothing is a continuous process; reality is a chaotic process.
Premise: Observation affects the history. Conclusions: Reality is subjective; the past can be affected by the present.
Premise: Energy is disconnected from mass.

Stephen Hawking
Premise: Creation of everything from nothing through a Big Bang of infinite mass and zero volume. Conclusion: The universe is expanding with a slowing rate of expansion.

Saul Perlmutter and Brian Schmidt (2011 Nobel Laureates)
Premise: Creation of everything from nothing through a Big Bang of infinite mass and zero volume. Conclusion: The universe is expanding with an accelerating rate of expansion.

James Quach
Premise: Creation of everything from nothing (an amorphous state) through crystallization (the Big Chill).

Dmitri Krioukov
Premise: Creation of everything from nothing through a Big Bang with further expansion of brain-like fractals. Conclusion: The universe is a super-intelligent design.

3.6.1 The Source of the Economics Track

Thomas Hobbes and John Locke represent the two extreme sides of European modern philosophy. Hobbes invokes no God but Locke does, yet neither of them comes anywhere close to Ibn Khaldun's model of social science. Islam's (2016) recent analysis reveals that both Hobbes and Locke invoked themselves as the law giver, making up standards that are as illogical as dogma. They make no contribution to the advancement of human behavior or the psychological grounding of a person or a society. Hobbes believed that people are naturally prone to competition, including violent competition, fighting out of fear, and seeking out reputation. The Leviathan, after eloquently describing these qualities of men, summarizes: "So that in the nature of man, we find three principal causes of quarrel. First, competition; secondly, diffidence; thirdly, glory," and that people therefore use violence "to make themselves masters of other men's persons, wives, children, and cattle; the second, to defend them; the third, for trifles." The logic is that because some have these qualities at times, people are in a constant state of war, and that as a result there is "continual fear, and danger of violent death; and the life of man, solitary, poor, nasty, brutish, and short" (Leviathan, p. 78). This occurs as long as there is no authority or "sovereign" to govern them (Sorrel, 1996). Therefore, there is a need for people to get together and create a non-revocable covenant to create a sovereign (person or office) that takes all the rights of the people, with the exception of self-preservation, in order to

establish peace and order. Here lies the fundamental premise of Hobbes that pertains to how to derive a 'standard.' In search of such a standard, Hobbes invoked the possibility of having a process that can select a sovereign based on consensus. By ceding power to the sovereign, the general public is no longer a party to the process of creating and enforcing laws. This means that the standard or grounding starts from the self (internal standard) to create an arbitrary external standard. Locke, on the other hand, had a different perception of how government should be set up and operated, as well as how standards would be enforced by the government. Since Locke is considered the 'father of classical liberalism', it is safe to assume that he was in favor of creating government by consent (Moseley, 2005). In terms of creating a political standard, Locke's notion of universal natural law contrasts sharply with that of Hobbes, for whom the laws have the single purpose of the creation of the sovereign. Also, Locke believed that sovereignty resides in the people, in contrast to Hobbes, who believed sovereignty belonged to the Sovereign. Locke believed that 'God' endowed man with natural laws that govern people in a state of nature (Forde, 2011). From this, our fundamental rights are derived from the premise, "no one ought to harm another in his life, health, liberty or possessions; for men being all the workmanship of one omnipotent and infinitely wise Maker" (Locke, 2014), rights which we have a right to defend and whose transgression we may punish. When people get together and set up government by their consent, the government is to provide these rights to the people and enforce the law for the "common good" (Locke, 2014). Hobbes, for his part, believed that the original source of authority that allows for any government to exist originates in the people.
However, such standards entail the formation of only one type of government, a (likely) tyrannical and authoritarian one (to say the least).9 Once set up, the rights (with the exception of self-preservation) of individuals are given up to the sovereign, which becomes the legal standard, viz., the source of law and its executioner. There are three main problems that we encounter in Hobbes' theory regarding its logical continuum. The third of Hobbes' 19 laws of nature states that people 'ought to set up a sovereign' (Bobbio 1993, 148), which leads to groups of people choosing their own sovereigns, each sovereign creating its own standards of law. This implies that: (a) there may logically be several different legal standards for one human race; (b) the sovereign itself is able to exercise tyrannical power, to oppress and suppress the people who set him up, and an extraordinary amount of optimism is put into the notion of the sovereign not abusing his power (Wolfenden 2010, 1), which may eventually lead to internal chaos; and (c) even if the sovereigns of the world are all righteous, there is nothing stopping an endless amount of fighting between states,10 posing obvious problems that Hobbes himself would seem averse to (as the purpose of the state is to create peace and order). In other words, Hobbes has no vision for the whole of humanity, or solution for stopping things that he himself would consider terrible (war, and civil war/uprising), as we will discuss later in the section. Note that this dichotomy arises from the assumption that the only two options for humanity are to either descend into a constant state of war, or to have a sovereign dictate everything to the people (Williams 2003).

If this assumption is false, there is absolutely no basis for the rest of Hobbes' theory. In essence, Hobbes' theories have no global or moral standard, for an individual or for the society. Ibn Khaldun's model (further discussed in Chapter 6), by contrast, is straightforward in the direction of legal derivation with a standard. As civilization progresses, there is exponentially more standardization based on globally applicable criteria, with the expected goal of global peace and tranquility. Hobbes' model is also straightforward and similar to that of Ibn Khaldun, but in the opposite direction (legal derivation without a global standard), as the standard (the sovereign) is created by individuals. As Figure 3.27 depicts, Hobbes' theory can be seen as a 'last-ditch' attempt at solving the equation of how to have a government when there are no global standards. In this sense, we do not have a problem aligning Hobbes with the original thoughts behind (the raison d'être of) Machiavelli's pragmatic government philosophy as summarized in Discourses on Livy. Hudson (1983, 142) reviews the Hobbesian notion that the source of moral authority is the sovereign, and that the sovereign institutes laws upon the people. Therefore, as stated before, there is no global ethic of what is right and wrong, thus disconnecting people from their purpose, much as Christianity did when it disconnected natural traits from reality. The theory is also inundated with contradictions and paradoxes. For example, different people, even proximal neighbours, can have radically different notions of "right" and "wrong" imposed upon them, leading to conflicting versions of what constitutes "right reason" according to Hobbes (Ibid, 139). Figure 3.27 depicts the direction of 'legal derivation', the process by which law is discerned, for each of the theorists' models. The standard Hobbes creates does not apply globally, or even nationally; in fact, even the smallest of communities can create their own standards.
Conversely, this model takes us in the direction opposite to the goal of a global order, into an order that fosters chaos. Hobbes' derivation of his political model is more rapid than that of the other philosopher. This is evident upon examination of the so-called 'laws of nature', which are merely a justification to create an all-powerful sovereign (the model he had in mind in the first place). As the figure depicts (Figure 3.27), Hobbes' model swerves in the direction of legal derivation relatively earlier in time, because the purpose of his natural law is to create government as fast as possible. As we have seen earlier, the premise of Hobbes' model is itself contradictory and can only result in chaos. Curiously, as we have evolved into the nadir of liberalism, everyone is promised legitimacy of his/her claim of righteousness. This matter has become a major crisis and a hotbed of political debate. Locke's model starts off in the direction of using reason and acknowledging the moral superiority of the Creator. This is when Locke mentions the existence of a universal natural law and possibly a future vision for mankind. For different reasons, Locke's philosophy cuts off and goes in the direction of chaos. This is inevitable when there is no defined logical criterion to determine if an action is "moral", "just", or "ethical". In essence, both Locke and Hobbes have only looked into the chaotic model (in the absence of adherence to shari'ah), albeit attempting to retrofit it as a standard model, leading to the absurdity of a Caliphate without shari'ah and humans without conscience.

Figure 3.27 The figure depicts the direction of 'legal derivation', the process by which law is discerned, for each of the theorists' models.

The biggest blunder of New science was to adopt Thomas Aquinas' model while replacing God with 'nature' in the social science sense and 'universe' in the hard science sense. The immediate outcome of this arrangement was the disconnection of conscience from human cognition. The disconnection of intention followed. In contemporary Western society, there is an all-pervasive perception that intentions do not count. Nobel Laureate Linus Pauling, prizewinner both for Chemistry and Peace, transmuted his work into the notion that humanity could live better with itself, and with nature, through the widest possible use and/or ingestion of chemicals. Essentially, his position is that "chemicals are chemicals," i.e., that knowledge of chemical structure discloses everything we need to know about physical matter, and that all chemical combinations sharing the same structure are identical regardless of how differently they may actually have been generated or existed in their current form (Pauling 1954). Note that this notion actually is much older, going back to Democritus, who introduced the notion of atomism, a notion that was taken as fact by Aristotle, Newton and all contemporary scientists (see Islam et al., 2010 and Khan and Islam, 2016 for a detailed discussion). Figure 3.28 shows how illogical premises have been either adopted without questioning their legitimacy or, worse, replaced with more illogical premises. In this process, there have been two different tracks. On the one side, we have the pro-morality scholars, while on the other side we have more 'secular' scholars, both sides promoting the same model that was once touted by Thomas Aquinas.
Overall, New science as a discipline has traveled from a flat earth theory to a flat universe theory; from the Trinity to an infinite god (desire being the god); from God, Son, and Holy Ghost to Money, Sex, and Control; from Church, Monarch, and Feudal lords to Corporation, Church, and Government. Constantly, absurdities have been introduced as 'science'. All these have the same driver that once controlled the Roman Catholic Church, that is, Money.

Figure 3.28 Both scientific and social theories have invoked aphenomenal premises that have become increasingly illogical.

3.6.2 Most Influential Economists of the Modern Era

Let us review some of the most influential economists of the Atlantic world, along with their first premises and the respective 'myopic outlook' that prevents them from understanding the notion of the 'big picture' or 'long term' as described in the previous chapters of this volume.

3.6.2.1 Adam Smith (1723–1790)

This political economist is considered to be the 'Father of Modern Economics'. Smith argued for free trade, market competition and the morality of private enterprise in his book, The Wealth of Nations. This work turned out to be the foundation for economic policies around the world. We have deconstructed his premises, which involve the recognition of an 'invisible hand' that is similar to Newton's 'external force'.

3.6.2.2 Karl Marx (1818–1883)

Although more recognized as a political activist of communism, Marx is actually a classical economist, whose ideas differ little in fundamental premises from those of Lord Keynes and, hence, Adam Smith (as discussed earlier). Adam Smith was secular in the sense that he did not believe that Christianity can provide one with answers to political problems. Marx, on the other hand, was an atheist, and on a practical level they both subscribed to the notion that morality has nothing to do with Divinity. While Adam Smith predicted fluctuations in the free market economy in order to reach an equilibrium – a notion maintained by Keynes and all contemporary economists,11 Marx attributed these fluctuations to capitalism itself and predicted the implosion of capitalism. The implosion would then give rise to socialism

that would eventually reach an equilibrium. Marx was unique among European philosophers in connecting human intention and conscience to the economic system (Zatzman and Islam, 2007).

3.6.2.3 John Maynard Keynes (1883–1946)

British economist John Maynard Keynes recognized that free markets would not automatically provide full employment and a euphoric state of equilibrium. In order to reach that state, the free market economy needed 'external' intervention, and that external body happened to be the government. This notion builds on the assertion that democracy is in itself a natural state and that the government arising from a democratic process is therefore a driver of this natural state. As such, Keynes proposed that state intervention is required during boom and bust cycles of the economy. Ever since, western economies have seen frequent intervention by the government under both conservative and liberal governments. Clearly, there is no divergence in economic policies between the two extremes of democracy. Today's economics is governed by Keynesian philosophy and its revised form, but from where did Keynes derive his inspirations? It certainly cannot be logic. Wood (1994) points out: "But who inspired Keynes? Hardly any example could be found in the economic literature. However, there was another possible source of inspiration: his experience as a 'portfolio manager' operating on the markets for stock, bonds and bank credit. In this interpretation there was nothing inherent to the economic situation in the 1930s that compelled an explanation of the crisis and slump in one way or another. 'Visions' (as Schumpeter called them) had been pre-formed and they dictated how the phenomena of the real world were perceived" (Wood, 1994, p. 244).

3.6.2.4 Alfred Marshall (1842–1924) (influenced by Pareto, Jevons, Bentham)

Marshall is the father of the supply and demand model. Scientifically, his Principles of Economics is akin to Newtonian linear models.
When supply and demand are related with a single relationship, excluding other phenomena that are no doubt related albeit their effects are not captured through direct relationships, the implicit assumption is that nothing else matters in a 'free market' economy. There are three especially crucial premises of the supply and demand linear functionality that are hidden:

a. unit costs of production can be lowered (and unit profit therefore expanded) by increasing output Q per unit time t, i.e., by driving ∂Q/∂t (the temporal rate of change of Q) unconditionally in a positive direction;

b. only the desired portion of the Q end-product is considered to have tangible economic and, therefore, also intangible social "value," while any unwanted consequences – e.g., degradation of, or risks to, public health, damage(s) to the environment, etc. – are discounted and dismissed as false costs of production;

c. demand for a product in a society flooded with perception-changing advertisements is immutable and is dictated by the natural need of society.

Note that, if relatively free competition still prevailed, premise (a) would not arise even as a

passing consideration. In an economy lacking monopolies, oligopolies, and/or cartels dictating effective demand by manipulating supply, unit costs of production remain mainly a function of some given level of technology. Once a certain proportion of investment in fixed capital (equipment and ground-rent for the production facility) becomes the norm generally among the various producers competing for customers in the same market, the unit costs of production cannot fall or be driven arbitrarily below a certain floor level without risking business loss. The unit cost thus becomes downwardly inelastic, i.e., capable of falling readily below any asserted floor price only under two conditions:

a. during moments of technological transformation of the industry, in which producers who are first to lower their unit costs by using more advanced machinery will gain market share, temporarily, at the expense of competitors; or

b. in conditions where financially stronger producers absorb financially weakened competitors.

In neoclassical models, which all assume competitiveness in the economy, this second circumstance is associated with the temporary cyclical crisis. This is the crisis that breaks out from time to time in periods of extended oversupply or weakened demand. In reality, contrary to the assumptions of the neoclassical economic models, the impacts of monopolies, oligopolies, and cartels have entirely displaced those of free competition and have become the norm rather than the exception. Under such conditions, lowering unit costs of production (and thereby expanding unit profit) by increasing output Q per unit time t, i.e., by driving ∂Q/∂t unconditionally in a positive direction, is no longer an occasional and exceptional tactical opportunity. It is a permanent policy option: monopolies, oligopolies, and cartels manipulate supply and demand because they can.
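The unit-cost floor described above can be sketched with a toy calculation. The cost figures below are entirely hypothetical, chosen only to show how the fixed-capital component F/Q shrinks as output Q grows while the variable component v sets the floor that unit cost cannot fall below:

```python
# Illustrative sketch (hypothetical numbers, not from the text):
# unit cost = F/Q + v, where F is fixed capital spread over output Q
# and v is the variable cost per unit. As Q grows, unit cost falls
# toward v but never below it -- the "floor" of downward inelasticity.

def unit_cost(Q, fixed=1000.0, variable=2.0):
    """Unit cost of production at output rate Q (> 0)."""
    return fixed / Q + variable

for Q in (100, 1000, 10000):
    print(Q, round(unit_cost(Q), 3))
# prints:
# 100 12.0
# 1000 3.0
# 10000 2.1
```

Only a technological transformation (lowering F or v) or absorption of a competitor's capacity can push the curve itself downward, which is the point of conditions (a) and (b) above.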
Premise (c) excludes the role of advertisement in creating false demand by promoting products that have no tangible benefit and are based on a wasteful lifestyle. Products are created with profit maximization as the primary impetus, often leading to maximized waste. Such products, as well as the economics that justify having these products in the market, are inherently unsustainable. While need-based demand is natural, what we have is greed-based, addiction-like artificial demand. Figure 3.29 shows how demand can be artificially created and will invariably lead to unsustainability, as expressed in 'economic crises'. There is no natural state, as none of these crises emerge from natural events. They are the product of a greed-driven economy, as opposed to a need-driven economy that invariably creates a natural equilibrium. This upward curve is conducive to zero-waste engineering.
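The natural, need-based equilibrium contrasted here with artificial demand can be illustrated with a minimal numerical sketch. The linear demand and supply curves and all coefficients below are hypothetical, serving only to show how a need-driven market settles at a single equilibrium point:

```python
# Minimal sketch (hypothetical coefficients): a need-based market with
# linear demand Qd = a - b*p and linear supply Qs = c + d*p settles at
# the price where Qd == Qs, i.e. a natural equilibrium.

def equilibrium(a, b, c, d):
    """Solve a - b*p = c + d*p for the equilibrium price p and quantity q."""
    p = (a - c) / (b + d)
    q = a - b * p
    return p, q

p, q = equilibrium(a=100.0, b=2.0, c=10.0, d=1.0)
print(p, q)  # prints: 30.0 40.0
```

Artificially inflated demand would amount to repeatedly shifting the coefficient a upward by advertisement rather than by need, so the "equilibrium" keeps moving and, in the text's terms, the system never settles but implodes into crisis.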

Figure 3.29 Real demand reaches equilibrium, whereas artificially created demand leads to an implosive economic infrastructure that ends up in a crisis.

Ibn Qayyim Al-Jawziyyah, as well as his predecessor Ibn Taymiyyah, introduced the relationship between supply and demand, i.e., in pointing out that the price increases with a lack of goods in a market or with increased demand, and vice-versa, in a non-monopolistic or oligopolistic market (i.e., with more or less perfect competition) (Islam, 2018). However, this analysis was for natural demand and supply: i.e., demand driven by the need of a person to live a comfortable life, and supply which is subject to labour, manufacturing, shipping, and other costs, which may be interfered with by natural events: e.g., natural disasters, droughts and famines, etc. By contrast, Alfred Marshall failed to distinguish between artificial and natural demands. Just as Ibn Khaldun's natural social science model was regurgitated by practically all modern social scientists, all of whom discarded the notion of the Caliphate (which would be the natural state) and focused entirely on the implosive 'empire model', modern economists built on the implosive 'empire model' and considered only the greed-driven economic system. When a government is called upon to fix this crisis, more money is siphoned from the public into the corporation that created the artificial demand. This government intervention only makes matters worse in economic terms and leads to more disparity and economic extremism. Another aspect is the transition between the intangible and the tangible. In natural manufacturing, the value of a product is increased by improving its utility. This fundamentally assumes that real value is added. When a price is put on knowledge, what justifies different values for the same fact, let alone high prices for falsehood?
For instance, in the information age, data are routinely collected from social media and sold to various corporations or government agencies. Where is the value addition? And how would this be amenable to supply and demand analysis? Figure 3.30 shows how false information can create false intelligence and compromise the real value of knowledge. While true information and true intelligence lead to total transparency, false information and/or sinister spin-doctoring can lead to increasing involvement of 'hidden hand behind hidden hand', often packaged as Adam Smith's 'invisible hand' mode of economic intervention. The figure shows how false information turns into ignorance and eventually replaces good governance and the rule of law with mobsterism and the rise of 'hidden hands behind hidden hands'.

Figure 3.30 Knowledge has to be true; otherwise it will create false perception and total opacity in the economic system.

3.6.2.5 Milton Friedman (1912–2006) (influenced by Alfred Marshall)

The transition from intangibles to tangibles took a new negative turn in the 20th century. Nobel prizes in economics were all reserved for yet another version of the opacity model, as described in Figure 3.30. The most prominent of this series of economists is Milton Friedman, who was an active supporter of Alfred Marshall's economic models. Similar to Adam Smith, he did not believe in mixing religion with politics, and his morality had no place for God, thus following the Hobbesian line of philosophy. He was awarded the 1976 Nobel Prize in Economics for his work on consumption analysis, monetary history and theory, and stabilization policy. He considered all economic activities of Capitalism to be natural and part of a process that would eventually reach equilibrium. This was the beginning of the neoconservative policies that were glamourized by President Ronald Reagan.

3.6.2.6 Jan Tinbergen (1903–1994)

Another Nobel Prize winner on our list, Dutch economist Jan Tinbergen was educated at Universiteit Leiden. Tinbergen is considered a pioneer in the field of econometrics, the application of macroeconomic models to economic policy making; this remains the method by which economic research is applied today. His work is theoretical and does not question the validity of pre-existing economic models and fiscal policies.

3.6.2.7 John Forbes Nash, Jr. (1928–2015)

Even though Nash was not an economist, he was awarded the Nobel Prize in Economics in 1994 for his pioneering work on game theory. This work provided a great tool of justification for the 'Selfish man' model. Islam et al. (2016a) have deconstructed the principles behind game theory. Game theory has produced several Nobel laureates in Economics. This is no ordinary feat, as economics is the driver of the modern era and Nobel prizes encapsulate the very best our society has to offer. It is, however, fashionable to criticize and critique Nobel laureates of peace and, frankly, there is little defense for that skepticism in a world that has seen Obama as a Nobel peace prize winner and Hitler as a nominee. For economics, however, it is rare to challenge a theory at the root, let alone dismiss it as spurious. Islam et al. (2016a, p. 209) state:

Even though game theory was developed for economics, it has been extended to political science, psychology, logic, computer science, and biology. The theory is based on limited resource availability mediated by a competitive attitude among participants (not unlike a so-called 'hunger game'). Both are rooted in the Eurocentric narration of humanity as another variant of the struggle for existence elsewhere in the animal kingdom, a struggle devoid of conscience or ability to think conscientiously. Here the authors cannot help noting that such a human is as incapacitated (or worse) as the entity doomed, according to early Christian theology, by 'original sin'. All the premises summoned to support such a presentation of everything essential about human beings are utterly devoid of any recognition of the realities of the natural order.
Nature in these philosophical premises possesses limited resources, and the entire animal kingdom is engaged in a more-or-less meaningless struggle for survival, mired entirely in selfish short-term aims.

Today, this is the essence of the game theory applied to a wide range of behavioral relations, a "theory" that has developed into an umbrella term for the logical side of decision science, equally applicable to humans and non-humans (e.g., computers, animals, etc.). ... Actual game theory—game theory in practice, so to speak—is anything but strategy. It is ruthless annihilation of the dissenting voices and a deliberate hunger game that kills only the opponents of the strategy. In various stages, it is the yin-and-yang of corporatization, imperialism, and mass indoctrination that undergirds this aphenomenal model. Game theory is summoned as justification as though it modeled a process that is entirely natural and based on 'science'.

Figure 3.31 places this model as a cancer (in the social sense) that brings humanity down to the deepest depths of indignity and ignorance. Once conscience and conscientious motive are replaced by desire, the downward spiral begins. The cancer cells gain momentum and fight every act of conscience/health/welfare/knowledge. This is the trajectory of the aphenomenal model.

Figure 3.31 From ill intention and ill-gotten power comes the onset of the cancer model. It gains traction, increasing the misery of the general population upon which the 'Aphenomenal Model' is being applied.

Figure 3.33 shows the way the aphenomenal model moves on to take control of society and implements corruption (of both tangible and intangible kinds) as the only option in a society. In the end, game theory gives one the option of choosing among many options, all belonging to the same trajectory. Historically, such a model appeared and developed its main characteristics in lock-step with the death throes of British imperialism (especially in Palestine, western Asia and the Indian subcontinent). Modern imperialism is both an outlook and a general guideline for effecting political and economic domination of peoples at home and abroad. In the post-World War Two era, U.S. empire building has continued and further elaborated the originally British-developed model. Wherever the implication of this model as a scientifically insidious mechanism for obfuscating justice in every aspect of social development has broken through the surface of the normal social order, it has encountered massive resistance.

Figure 3.32 Every ‘-ism’ introduced in the modern age belongs to the same false premise that launched the current civilization in the deliberate hunger game model.

Figure 3.33 The great debate rages on, mostly as a distraction from the real debate that should be about fundamental premises that are taken at face value.

Not surprisingly, the greatest and most intense such resistance has come from all three of the establishments (political, media, and financial). Modern game theory proceeds from notions of mixed-strategy equilibria in two-person zero-sum games and their proof by John von Neumann. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty. In reality, game theory offers a choice between two bad choices and, more importantly, gives the impression that all options have been exhausted, thereby making sure that the phenomenal or knowledge-based option never surfaces. In the end, what we are left with is the option of 'agreeing to disagree' or, worse, of choosing between two bad choices that give the illusion that we have a choice (Figure 3.33). Often, this might mean choosing between two corrupt parties, two toxic medications, or two unsustainable technologies.

3.6.2.8 Muhammad Yunus 1940–Present

Even though Muhammad Yunus is most famous for his Nobel Peace Prize, his most notable academic work is in the domain of microlending to the poorest of communities. His fundamental premise was that very small loans had a disproportionately positive impact on a poor person, who would not otherwise qualify for a loan from a bank because of the lack of any collateral. Lending to poor people was a practice banks were reluctant to subscribe to before Yunus' work. Thus microcredit was born; it is now widely used in developing countries and credited with the potential to alleviate poverty. Critics of microcredit argue that the practice leaves the world's poor in a debt trap. Even though Grameen Bank touted giving out loans for as little as 0% interest, in reality many horror stories surfaced, one even during the week Yunus was being awarded the Nobel Peace Prize, when his Grameen Bank security people were allegedly removing tin sheds from the huts of villagers who could not pay the 20% compound interest. Grameen Bank itself claims to charge an average of 16% interest, which can go up to 20%, at a time when the US Federal Reserve had set the interest rate at 1%. Another consideration is the introduction of hidden costs.
For instance, clients may be required to put some of their borrowings into creditor-run savings accounts as a kind of collateral – "forced savings" – which reduces available credit without proportionally reducing interest payments and thus effectively raises the rate. There are also provisions for up-front fees. In the case of Grameen, each member must buy a 100 taka (1.19 USD in 2018) share of the Bank once her forced savings balance is high enough to pay for it. A Grameen member cannot get that 100 taka back until she quits the bank, but in recent years Grameen has paid handsome dividends on those shares – for instance, 100% of face value for 2006 and 30% for 2009. On top of this, there is the false pretense of forcing people to save and then factoring that in as a cost to the bank. Bairagi and Azzeddine (2014) used a recently developed stochastic frontier estimator of market power to determine that the Grameen Bank's lending rates reflect a markup of about 3% above marginal cost. Overall, what microlending has created is a fundamental change that affects the poorest of the poor and funnels profits at the expense of the most hapless of the community. Such an economic model has justified institutional exploitation of the weak and the vulnerable. As Bairagi and Azzeddine (2014) pointed out, microlending institutions have instituted obscene rates that range from 97% to 165% above marginal cost. Yunus' microlending model, which earned him a Nobel Prize in peace, has become the impersonation of the pyramid scheme, the legal Ponzi scheme, and the old-fashioned usury model, all packaged as 'helping the poor'. In the words of Yunus himself, he "never imagined that one day microcredit would give rise to its own breed of loan sharks" (Yunus, 2013). Assefa et al. (2013) used a cost function to estimate the Lerner index for 362 commercial micro-financing institutions (MFIs) in 73 countries. They found that the average MFI in their sample 'seems to enjoy quite some level of market power, enabling them to charge interest rates above marginal cost' – the average Lerner index is 0.582 and ranges from 0.492 for MFIs in South Asia to 0.622 for MFIs in the Middle East and North Africa. The authors conclude that their results support the claim of those who consider "commercialization of the microfinance sector as a potential threat to its longer-term stability and success, especially in terms of its financial objectives." So, what does Yunus propose as a remedy? He suggested capping interest rates, and that "the ideal 'spread' between the cost of the fund [what the bank pays to procure the money to lend] and the lending rate should be close to 10%" (Yunus, 2013). Thus, he lifts the bar to a level that would be considered entirely unacceptable were the loan recipients among the largest of corporations, which are in no need of help. Aslanbeigui et al. (2010) point out that the notion of empowerment through Grameen Bank microlending, as employed in this literature, is rife with logical fallacies. In fact, the selection of women and providing them with employment becomes an impetus for the breakdown of families, which in turn disempowers the entire community. Such an effect cannot be reflected in a survey that lasts a few years. Empowerment is a concept that must include generational and intergenerational impacts. The problem is that, at present, no economic model exists that can assess such impacts.
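The two mechanisms described above – "forced savings" effectively raising the borrowing rate, and market power measured by the Lerner index – can be sketched with a short calculation. The loan figures below (16% nominal rate, 10% forced-savings share) are illustrative assumptions; only the 0.582 Lerner value comes from Assefa et al. (2013).

```python
# Illustrative sketch: how "forced savings" raise the effective borrowing rate,
# and what a Lerner index implies about price versus marginal cost.
# The nominal rate and forced-savings share below are hypothetical examples.

def effective_rate(nominal_rate, forced_savings_share):
    """Interest is charged on the full principal, but the borrower can only
    use the fraction of the loan not locked up as forced savings."""
    return nominal_rate / (1.0 - forced_savings_share)

# A 16% nominal rate with 10% of the loan held back as forced savings:
r_eff = effective_rate(0.16, 0.10)
print(f"effective rate: {r_eff:.1%}")  # ~17.8% on the money actually usable

# Lerner index L = (P - MC) / P, hence P = MC / (1 - L).
L = 0.582                    # average reported by Assefa et al. (2013)
markup = 1.0 / (1.0 - L)     # price as a multiple of marginal cost
print(f"price = {markup:.2f} x marginal cost")  # ~2.39x
```

In other words, an average Lerner index of 0.582 means the typical MFI in that sample prices its loans at roughly two and a half times marginal cost.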
However, it is intuitively known in conservative communities (for both Muslims and Hindus in the subcontinent and Christians in the west) that such arrangements risk the breakdown of families in exchange for short-term gain. What we have actually seen is that the duration of the period of gain is rather short, and within years the person involved may start to suffer from the modus operandi of what none other than Muhammad Yunus called 'loan sharks'. What Yunus' model fails to capture is the role of the interest rate in the modern economic system (Zatzman and Islam, 2007). Just as Nobel laureate economist Amartya Sen took western democracy as the standard and automatically assumed that such a "perfect standard" would lead to a natural state of equilibrium no matter what disaster hits the economic system, Muhammad Yunus took as the standard the western model that puts the interest rate in the driver's seat. It is well known that the Federal Reserve of the United States changes interest rates in order to achieve maximum employment and minimum inflation, and thus "optimize" economic growth. The inherent assumption of this mindset is that a low interest rate leads to more consumer spending, which will then stimulate economic growth by increasing demand and subsequent supplies. Supplies are assumed to be invariably related to demand, and if supplies cannot keep up with demand, there will be an onset of inflation. This is because prices in the modern economy are not dictated by any natural standard. In fact, there is no natural pricing system that would relate prices to values or natural need. A shortage might mean that goods have to be imported from a distance, adding to the cost of production. However, barring such additional cost, shortage itself cannot spur inflation.

Another implicit assumption is that lowering the interest rate will spur spending. This is true only in a society that glamourizes spending based on greed: only there does having extra money mean more purchases, irrespective of actual need. Zatzman and Islam (2007) called such a basis of economic development 'interest-driven' economics, as opposed to a natural process that should be driven by conscience (hence an intention-based economy). Ever since the US dollar was decoupled from the gold reserve in the Nixon era, the US Federal Reserve has been manipulating the interest rate in order to create a false paradigm in economic development. Monetary policy is decided by the Federal Open Market Committee (FOMC), which meets eight times each year and bases its decisions on economic and financial conditions that are entirely disconnected from real assets and based instead on perceptions. Monetary policy, the architect of the US economy, is all about the availability and cost of money and credit, without any consideration of real value or real products that involve real labour or true value addition through quality improvements. The policy involves setting short-term interest rate targets. The economic indicators used to justify any change in interest rates are the Consumer Price Index (CPI) and the Producer Price Index (PPI). Therefore, these two form the basis for any 'balancing' of the economy through interest-rate adjustment. This process is inherently unbalanced, and any talk of 'balance' really relates to a very short term, which in scientific terms means ∆t → 0. In essence, this is the same model that is used in Newtonian mechanics. Figure 3.34 shows interest rates fixed by the Federal Reserve over the last 60 years. This process of manipulating interest rates attempts to achieve stability in employment rates, commodity prices, and economic growth.

Figure 3.34 Interest rate over the years (shaded areas: US recessions); data from research.stlouisfed.org.

Figure 3.35 shows how inflation rates follow the interest rates with a time gap (with a negative correlation).

Figure 3.35 Historical fluctuation of inflation rates in the USA (redrawn from Website 1).

When the interest rate is reduced, it becomes more appealing for the general public to buy goods that they could not afford before, spurring spending on non-essential goods and leading to price hikes and eventual inflation. Of course, this is premised on there being limited supplies. Another aspect is the increase in corporate investment, as companies now have more money available. Because most investments include employment, this leads to an increase in employment and a reduction in unemployment rates. This also spurs inflation. However, investments also lead to general production of goods and real estate, thereby spurring economic activity. Higher levels of supply increase competition and help decrease the prices of goods, thereby reducing inflation. If one considers longer-term consequences, an image more akin to Figure 3.36 emerges. This theoretical representation is difficult to prove with empirical data because the Federal Reserve manipulates the interest rate before the market reaches equilibrium. However, if a 0% interest rate is invoked, forcing all lenders to lend money based on profit sharing, all graphs collapse to 0, meaning zero inflation, as depicted in Figure 3.36. To some extent, this scenario is captured in Figure 3.37, which shows that any interest-rate hike is followed by a drop in inflation after a certain time lapse.
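The delayed, negative relationship between rate hikes and inflation described here can be expressed as a lagged correlation. The sketch below uses synthetic series constructed for illustration (not the FRED data behind the figures):

```python
# Sketch of the lagged relationship described above: inflation following
# interest-rate moves with a delay and a negative correlation.
# The two series are synthetic, for illustration only.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def lagged_corr(rate, inflation, lag):
    """Correlate today's rate with inflation `lag` periods later."""
    return pearson(rate[:-lag], inflation[lag:])

# Synthetic data: each rate move shows up two periods later as the
# opposite move in inflation (hike -> lower inflation).
rate = [1, 2, 4, 5, 4, 2, 1, 2, 4, 5, 4, 2]
inflation = [3, 4] + [5 - r for r in rate[:-2]]  # mirrored, shifted by 2

print(round(lagged_corr(rate, inflation, 2), 2))  # -1.0 by construction
```

A strongly negative value at a positive lag is the quantitative signature of the pattern visible in Figures 3.35 and 3.37.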

Figure 3.36 Interest rate and inflation rate.

Figure 3.37 Interest rate and inflation rate over the years (from Federal Reserve Economic Data).

This phenomenon can also be captured by the Phillips curve12 in the short term. As shown in Figure 3.38, as more money becomes available to employers, who benefit from employing the labour force, the unemployment rate drops, leading to more money in public hands. As stated earlier, this increased rate of employment then spurs inflation.
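The asymptotic shape of the short-run Phillips relation can be written compactly. One simple hyperbolic form (our illustrative notation and constants, not taken from the source) is:

```latex
% Illustrative short-run Phillips relation with hypothetical constants a, b > 0:
% inflation rises without bound as unemployment u -> 0, and inflation reaches
% zero only at a non-zero unemployment rate u = a/b.
\pi(u) \;=\; \frac{a}{u} \;-\; b, \qquad a,\, b > 0,
\qquad \lim_{u \to 0^{+}} \pi(u) = \infty,
\qquad \pi\!\left(\tfrac{a}{b}\right) = 0 .
```

In this form the curve never touches the vertical axis, and zero inflation coexists with a non-zero unemployment rate, which is the asymptotic feature discussed below.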

Figure 3.38 The short-run Phillips curve.

One interesting feature of the Phillips curve is the asymptotic relationship between unemployment and the inflation rate. As the unemployment rate drops toward zero, the inflation rate rises steeply; conversely, at an inflation rate of 0%, meaning prices are unrelated to interest rates, the unemployment rate settles at a non-zero constant value. Here, the term 'employed' necessarily means working within the capitalist system, in which the labour force translates into wealth accumulated by the corporation. In addition, it is assumed that employment is necessarily linked to spending, thus increasing vulnerability to inflation. Even in a steady state, the unemployment rate assumes a non-zero constant value. In summary, what Grameen Bank is to economic sustainability, bioengineering is to food sustainability. Both introduce fundamentally toxic and inherently implosive models to claim sustainability and long-term good.

3.6.2.9 Warren Buffett 1930–Present

Although Warren Buffett is not an economist by trade, his formal education was in economics. More importantly, he has become an icon of investment in the Information Age, which transforms perception into hard cash. In this economics, money is made without any production of goods. This, however, is not the most sinister aspect of this economic system. He personified the notion of financialization – the concept that is at the core of modern economics. Although many see this phenomenon as new, our analysis shows there is nothing new in this process. A further analysis will follow in later sections.

3.6.2.10 Joseph Stiglitz (influenced by Keynes)

Nobel laureate economist Joseph Stiglitz represents a new genre of economists that justified the spurious equilibrium model depicted in Figures 3.29 and 3.30. His views are equivalent to neo-liberalism. As discussed earlier, this line of thinking gave rise to government interventionists who assign ultimate sanctity to the government that is upheld by the establishment. He is a recipient of the Nobel Memorial Prize in Economic Sciences (2001) and the John Bates Clark Medal (1979). He is a former senior vice president and chief economist of the World Bank and a former member and chairman of the (US President's) Council of Economic Advisers. He is known for his support of Georgist public finance theory and for his critical view of the management of globalization, of laissez-faire economists (whom he calls "free market fundamentalists"), and of international institutions such as the International Monetary Fund and the World Bank.

3.6.2.11 Paul Robin Krugman (influenced by Keynes, Stiglitz)

Another Nobel laureate economist, Krugman has been a practical replica of the Stiglitz school. He is the mastermind of recent vitriolic attacks on President Trump and his policies. This is the same group that predicted doomsday if Trump were to be elected. In reality, the opposite happened.
A group of 20 Nobel Prize–winning economists warned last week that a Trump presidency could "jeopardize the foundations of American prosperity and the global economy." Even before the elections, 370 economists, along with 8 Nobel laureates, jointly expressed concerns over how disastrous a Trump win would be, predicting stock prices would collapse, gold prices would skyrocket, and America would become a laughing stock (Timiraros, 2016). When that prediction failed spectacularly (the gold price falling by 10%, stocks rising to a record high and breaking the 20,000 ceiling), the likes of Nobel laureate economist Joseph Stiglitz continued the narrative: "There is a broad consensus that the kind of policies that our president-elect has proposed are among the policies that will not work." In the meantime, other Nobel laureates weighed in as well (e.g., Nobel laureate Robert Shiller, who stated that Donald Trump 'could send America into the next Great Crash'). Paul Krugman continues his crusade against the 'right', in particular President Trump. In his recent piece in the New York Times (Krugman, 2017a), titled "Lies, Lies, Lies, Lies, Lies, Lies, Lies, Lies, Lies," he listed 10 inconsistencies in Trump's statements. They are listed below, along with deconstructions of their premises.

Lie #1: America is the most highly-taxed country in the world

In refuting this 'lie,' Krugman pulls out the following figure from the OECD but neglects to mention that it lists tax as a percentage of GDP. He doesn't stop there; he continues to denigrate the President and anyone who supports him. In his words, "Why does Trump keep repeating what even he has to know by now is a flat lie? I suspect it's a power thing: he enjoys showing that he can lie repeatedly through his teeth, be caught red-handed in his lie again and again, and his followers will still believe him rather than the 'fake news' media." Yet, if he cared to do the calculation, it would clearly show that the US public is by far the biggest taxpayer in the world in actual tax dollars. Of course, Krugman would not go there. A conscientious economist would go further and highlight who carries the burden of this tax and how the biggest profiteers manage to shelter all their earnings from taxes while maintaining an obscenely lavish lifestyle and tax-free hoarding of wealth. With such analysis, the other citations of 'lies' would become a triviality. It is in Krugman's interest to keep the debate at the level of name-calling. Figure 3.39 is used in support of Lie #1.
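The distinction between a tax *share* of GDP and an absolute tax take can be made concrete with a back-of-the-envelope calculation. The figures below are rough, illustrative approximations (trillions of USD, circa 2017), not OECD data:

```python
# A lower tax-to-GDP share can still mean the largest tax take in absolute
# dollars. GDP and tax-share figures below are rough approximations for
# illustration only, not official statistics.

economies = {
    # name: (GDP in $T, tax revenue as a share of GDP)
    "United States": (19.4, 0.27),
    "Germany":       (3.7,  0.37),
    "France":        (2.6,  0.46),
    "Japan":         (4.9,  0.31),
}

# Absolute tax take = GDP x tax share
absolute = {name: gdp * share for name, (gdp, share) in economies.items()}
biggest = max(absolute, key=absolute.get)

for name, t in sorted(absolute.items(), key=lambda kv: -kv[1]):
    print(f"{name:13s} ~${t:.1f}T in tax")
print("largest absolute payer:", biggest)  # United States
```

France's share (46%) dwarfs the US share (27%), yet the US absolute take is several times larger, which is the point the percentage-of-GDP chart obscures.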

Figure 3.39 Taxes as a percentage of GDP for various countries. The darker bar is the US; the darkest bar the average for advanced countries (OECD Report, 2017).

Lie #2: The estate tax is destroying farmers and truckers

Here, Krugman limits the debate to the lack of examples and conveniently cites the fact that "only a small number of very large estates pay any tax at all, and only a tiny fraction of those tax-paying estates are small businesses or family farms". He gives no explanation as to what type of government would tax the less capable and let the more capable go free. Figure 3.40 is used to deconstruct Lie #2.

Figure 3.40 Pictorial depiction of estates with tax concerns (from Krugman, 2017).

Lie #3: Taxation of pass-through entities is a burden on small business

Here Krugman painstakingly lectures on what a small business is and how "their earnings are simply 'passed through': counted as part of their owners' personal income and taxed accordingly". Then he sensationalizes the liberal agenda by shedding crocodile tears for the poor and excluding "doctors, lawyers, consultants, other professionals, and, at the very highest end, partners in hedge funds or other investment firms" from being small businesses. He conveniently forgets the definition of a small business. Once again, the problem of the highest-income people paying no tax is carefully avoided.

Lie #4: Cutting profits taxes really benefits workers

Krugman brings out this convoluted statement of Trump's and packages it as a 'lie'. However, Krugman presents no logical explanation as to why it is a lie or under what scenario Krugman's own alternative is anything different from this statement. For instance, he wonders:

Think about what happens if you cut the taxes on corporate profits. The immediate impact is that (duh) corporations have more money. Why would they spend that extra money on hiring more workers or increasing their wages? Not, surely, out of the goodness of their hearts – and not in response to worker demands, because these days nobody cares what workers think.

Sarcasm and rhetoric aside, this narration would be the same under so-called socialist, bleeding-heart liberals. As we have seen in earlier sections, all liberal philosophies assume humans to be inherently selfish and therefore motivated by self-interest in the shortest term, while maximizing pleasure and minimizing pain. So, if corporations cannot be trusted to do anything for the labour force, who should be? It certainly cannot be the government, let alone a liberal government. Next, Krugman discusses several scenarios and carefully punches holes in them.
However, he avoids any notion of good business or the mantra 'doing good is good business'. Obviously, such a notion would mean dedication to the long term, because only in the long term can this mantra come true, and by no means can Krugman contradict his idol Keynes' infamous dogma: In the long run, we are all dead.

Lie #5: Repatriating overseas profits will create jobs

In asserting that this is a lie, Krugman argues that because surplus cash flow can be extracted through near zero-interest loans, companies have no reason to bring in the money in cash even when a tax amnesty is given. The reason that these companies do not bring the cash into the USA is not anxiety over tax but rather a lack of perceived opportunity to invest in the USA. There, Krugman stops and pleads ignorance of the possibility of creating the perception of investment opportunities. Of course, this is to be expected. After all, the only investment opportunities Krugman can envision are those of making money out of nothing without actually producing any tangible product. For that, there is certainly no reason for any company to bring cash to the USA. Transforming the intangible (e.g., information) into the tangible (e.g., gold) is only a matter of perception that can be created at will by the Fed as well as the financial institutions. Krugman then presents empirical evidence from 2004, when the US enacted the Homeland Investment Act, which offered a tax holiday for the repatriation of foreign earnings by U.S. multinationals. He claims that repatriations did not lead to an increase in domestic investment, employment or R&D — even for the firms that lobbied for the tax holiday stating these intentions and for firms that appeared to be financially constrained. Instead, a $1 increase in repatriations was associated with an increase of almost $1 in payouts to shareholders. Why is Krugman surprised that shareholders are the only ones who benefited from the repatriation? What other function do today's multinationals have than maximizing quarterly dividends? Why should they be interested in job creation or other labour-intensive investment opportunities when 10-fold cheaper labour is available overseas and the profit margin lies in perception and other intangibles, not in the production of goods?
Lie #6: This is not a tax cut for the rich

Trump says it is not, so that's that, right? Oh, wait. Krugman makes a big deal out of this ambiguous statement without defining what 'rich' is or what legitimacy any state has for giving a tax cut. History is clear as to what prompts political actions, and it is certainly not love for the poor or even the middle class.

Lie #7: It's a big tax cut for the middle class

This is the same argument as for Lie #6. Both parties speak the same language, and all economists subscribe to the same Establishment that sponsors both parties.

Lie #8: It won't increase the deficit

Krugman presumes that downsizing the government or making government more efficient is out of the question. As usual, the only conscientious option does not pique Krugman's interest.

Lie #9: Cutting taxes will jump-start rapid growth

Nothing jump-starts growth unless the scheme is based on a fundamentally sound financial system – something that has not been in place in the USA in modern history. As we discussed earlier, a sound fundamental system requires a conscientious starting point. For Krugman and modern economists of the right and the left, this is not an option.

Lie #10: Tax cuts will pay for themselves

Once again, Krugman picks on trivialities and explains nothing about the circumstances under which tax cuts can actually pay for themselves. This brings up the question of what role the government should play in a free market economy.

3.6.3 The Role of Government

As usual, there is a theory proposed to account for everything. In this particular case, people cite the Rahn curve (Figure 3.41). The peak of the Rahn curve is the optimal or "growth-maximizing" size of government relative to the overall economy.
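The inverted-U shape of the Rahn curve can be sketched numerically. The quadratic form and its coefficients below are hypothetical; only the idea of a "growth-maximizing" government size is from the text (the studies discussed later in this section place the threshold near 25–30 percent of GDP):

```python
# Minimal sketch of the Rahn curve's inverted-U shape. The functional form
# and coefficients are hypothetical, chosen so the peak falls at 25% of GDP.

def growth(g):
    """Illustrative growth rate (%) as a function of government size g
    (share of GDP): rising at first, then declining past the peak."""
    return 1.0 + 14.0 * g - 28.0 * g ** 2  # peak at g = 14/56 = 0.25

# Scan government sizes from 5% to 60% of GDP and find the peak.
sizes = [i / 100 for i in range(5, 61)]
peak = max(sizes, key=growth)
print(f"growth-maximizing size: {peak:.0%} of GDP")  # 25%
```

Any concave function with an interior maximum tells the same story: spending helps growth up to the peak and hurts it beyond.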

Figure 3.41 The Rahn curve.

Recently, Gill and Martin (2012) discovered a correlation between government size and economic growth for various countries in Europe and around the world. In line with the Rahn curve (Figure 3.41), they reported that government spending reduces growth, "particularly when it exceeds 40 percent of GDP". They also noted a caveat: perhaps because governments are smaller outside Europe, there is no evidence that government size generally harms growth in the global sample. Independently of the Gill and Martin study, Bergh and Henrekson (2011) came to a similar conclusion based on a survey of government size and economic growth in wealthy countries. They found that an increase in government size by 10 percentage points is associated with a 0.5 to 1 percent lower annual growth rate. Other recent reviews of the literature on military spending and government transfers (Awaworyi and Yew, 2014) – two of the largest budget categories in the United States and other western countries – found that higher levels of each are associated with slower economic growth. This is surprising, because European economic growth has been synonymous with military spending for centuries. Acemoglu and Robinson (2012), in their bestselling book Why Nations Fail: The Origins of Power, Prosperity, and Poverty, rehashed the original theory of Ibn Khaldun, except that they excluded Ibn Khaldun's Caliphate model, as is usual in modern literature, and conflated good governance with an 'accountable' and 'responsive' government. The inherent assumption these authors work with is that Democracy leads to the emergence of such a government. This line of conclusion was also touted by Nobel laureate economist Amartya Sen, for whom Democracy leads to a natural equilibrium state that is synonymous with economic euphoria. Acemoglu and Robinson (2012) go a step further in justifying such a conclusion with game theory. In support of their theory, they cite historical evidence from the Roman Empire, the Mayan city-states, medieval Venice, the Soviet Union, Latin America, England, Europe, the United States, and Africa. Curiously but not surprisingly, these authors leave out the entire Muslim world, including the Rashidun Caliphate – the model of governance as per Ibn Khaldun, the father of sociology. The 1000 years that these authors missed in their analysis were the golden era of the Muslim world. According to Islam et al. (2013, p. 337):

The rise of the Abbasid Caliphate saw an era that Europeans have characterized as the "Islamic golden era". Even the sympathizers of Islam characterize this period as having unprecedented progress in human history. They reconcile this progress as bearing fruit of the seed planted by Prophet Muhammad, whose epoch is considered 'standard' for the rest of the world until the Day of Judgment.

These days, big government is synonymous with power and corruption at the same time. In making the Orwellian prediction of the Culture of Doom come true, the most powerful nations are also perpetrating the culture of the 'deep state'. Every day there is news of a government cover-up and yet another episode of scandal in the financial market. In this, government has lost all respect and trust from its people, and people in general have more trust in the Mafia and dacoits than they have in individual governments. The sequence continues as the UN has assumed the role of the world's biggest government, acting as the goon of a select group of countries. This truly undemocratic body personifies what the problem is in this world.
It turns out that the UN mockery of its name would be at its worst in the year that Nobel Peace price was awarded to UN for the first time in history. The world did not see a day of Peace since. Home to the world’s second-largest population, the country fared poorly on the Global Hunger Index (GHI) for 2017 released by the Washington-based International Food Policy Research Institute (IFPRI) on Thursday (Oct. 12). India ranked 100 out of 119 countries on the GHI, worse than last year’s position of 97 out of 118. A lower ranking is indicative of a higher rate of malnutrition and hunger. Tandon, 2017. In addition, 9 of the 13 countries that lack sufficient data for calculating 2017 GHI score – including Somalia, South Sudan, and Syria – raise significant concern, and in fact may have some of the highest levels of hunger. Both Nigeria and Yemen fall under ‘extremely alarming’ category. Ever since Adam Smith entitled his book The Wealth of Nations, economists have tried to explain why some countries are so much richer than others. One important channel, discussed in Acemoglu and Robinson’s new book Why Nations Fail, is that outsiders come in and impose new rules of the game. Sometimes they impose good rules, sometimes bad. The question then arises: Do American troops help or hinder economic growth in other countries? This is not a question that can be answered without resorting to the history of European economic history that has flourished whenever governments engaged in excessive spending, often in disguise of war. In addition, one must look at the mitigating factors that

control the flow of information. Recently, Worstall (2012) pointed out these factors while challenging the mainstream narrative that war is a boon to the economy. He argued that the effects might have more to do with what the presence of the US Military prevents from happening than with anything positive the US Military causes to happen. He questions the core premise of the analysis, which covers the 1950–1990 period. This, he reminds the readership, is the period in which a country’s economy was bound to perform badly wherever Communist ideology was imposed on a postwar regime. This is a no-brainer. He also points out the “industry protecting, import substitution anti-economic colonialism” followed in many African and Latin American countries over the same period. So, in essence, the US Military was salvaging a wreck that was about to happen anyway. It follows that any country hosting a large military force would avoid the economic collapse that was otherwise in the making. Translation: the economic boost created by the presence of the US military has nothing to do with actual economic progress. In 2011, Abrams and Wang presented a remarkable paper. This paper examines the dynamic effects of government outlays on economic growth and the unemployment rate. Using vector autoregression and data from twenty OECD countries over three recent decades, we found: (1) positive shocks to government outlays slow down economic growth and raise the unemployment rate; (2) different types of government outlays have different effects on growth and unemployment, with transfers and subsidies having a larger effect than government purchases; (3) causality runs one-way from government outlays to economic growth and the unemployment rate; (4) the above results are not sensitive to how government outlays are financed. 
But that’s not all: several other studies also suggest that countries with relatively large governments tend to experience higher rates of unemployment, and, of course, high unemployment results in less output than if more people were working. One paper by economists at the University of Delaware’s Department of Economics found that “positive shocks to government outlays slow down economic growth and raise the unemployment rate.” Likewise, research from the International Monetary Fund found that increasing public hiring — a popular prescription to alleviate unemployment since at least the New Deal — fully crowds out private employment and consequently reduces economic growth, while incurring substantial fiscal costs through a diminished tax base. In developed countries, limiting the size of government can help the economy grow faster. Studies explicitly examining the non-linear relationship between government size and growth typically find that government spending hurts growth once it exceeds 25–30 percent of GDP. One paper found that over three-quarters of developed countries have levels of government spending that are detrimental to their growth. The author concluded that, “The concern about large governments is not misplaced. Ever-expanding governments will have negative effects on long-run growth.” Based on an extensive body of research, economist Dan Mitchell has proposed a Golden Rule

for fiscal policy: government spending should grow more slowly than the economy itself. Such a rule would effectively ensure that government doesn’t outpace (and eventually swamp) the private sector. Fueled by exponential growth in taxation, the US government has grown so far out of control that many now see it as the biggest threat to the public good. As ironic as it sounds, this is not unexpected. As early as 1990, Higgs asked the pointed question: “Where once Americans viewed the powers of government as properly limited and the rights of individuals as primary and natural, Americans now view the powers of government as properly unlimited and the rights of individuals as subordinate to the pursuit of any declared ‘public policy.’ How did so many activities once viewed as ‘not the proper business of government’ come to be undertaken by governments and accepted as legitimate?” Needless to say, this is in sharp contrast to what the founding fathers envisioned, as stated by Higgs (1990): “Our nation was founded by men who believed in limited government, especially limited central government. They were not anarchists; nor did they espouse laissez faire. But they did believe that rulers ought to be restrained and accountable to the people they govern. If the founders could see what has happened to the relation between the citizens and the government in the United States during the past two centuries, they would be appalled.” From an economic perspective, such extraordinary growth in government has been supported by an obscene growth in taxation (Figure 3.42). Before President Reagan’s time, increases in taxes coincided with national crises. That is no longer the case; instead, a ‘hidden hand’ now dictates the plundering of public wealth. It was not like this before: events such as the Civil War, World War I, and World War II were the driving forces behind excessive government expenditure. 
It was during the Civil War that income tax protocols were first conceived, albeit as a temporary measure. Even though the income tax was ratified through a constitutional amendment in 1913, to this day no moral justification has been given for such a seemingly unconstitutional measure. The Great Depression, of course, was the first crisis driven by financial greed13.

Figure 3.42 Taxation is part and parcel of government growth.

A number of myths have been perpetrated regarding government spending, taxation, and debt accumulation. Before WWII, federal, state, and local governments took in revenues equal to 6 to 7 percent of the gross national product (GNP). This number rose quickly after the Second World War, reaching 24 percent of GNP in 1950, then continued to rise beyond 30 percent of GNP in the 1990s (Higgs, 1990). The trend continues, keeping pace with the increasing national debt. Many appear to believe that taxes were cut during the Reagan era of the 1980s. This is not factual. The Reagan era is memorable for various reasons, but tax reduction is not one of them. Many also erroneously claim that the Reagan era was marked by a reduction in government spending. Once again, government spending, which had been six to seven percent of GNP, rose past 20% by 1950 and remained steady at around 34% in the following years. The so-called Reagan Revolution in fiscal policy was but a gimmick built on mounting government debt. A similar pattern holds for government employment. Early in the 20th century, federal, state, and local governments employed about 4 percent of the civilian labor force. By 1950, government employment had risen to about 10 percent (Higgs, 1990). Since then, government employment has risen and fallen: it reached a peak in the mid-1970s at nearly 16 percent, then fell to its present level of roughly 14 percent—that is, one worker in every seven. A gloomier picture emerges when one considers that these employment numbers do not reflect the full size of the government. Typically, a good portion of government employees are contractors who are counted as part of the private labour force. This portion has skyrocketed since the invasion of Iraq in 1990 and the subsequent military actions worldwide; the defense industry is a particular culprit in this equation. Even with these mitigating factors, the number of government employees exceeded the number of employees in the manufacturing sector in 1989, and the gap between the two sectors continues to widen. There were 21,995,000 individuals employed by federal, state, and local government in the United States in August 2015. 
By contrast, only 12,329,000 were employed in the manufacturing sector (Jeffrey, 2015). The BLS (Bureau of Labor Statistics) has published seasonally adjusted month-by-month employment numbers for both government and manufacturing going back to 1939 (Figure 3.43). As the figure shows, government overtook manufacturing as a U.S. employer in 1989. Since then, government employment has increased by 4,006,000 while manufacturing employment has declined by 5,635,000 (Jeffrey, 2015). After the 9/11 terror attacks, government employment received a boost that is reflected in the increased slope of the curve; the creation of a number of new departments can be attributed to that rise. Even though the slope decreased somewhat during the Obama era, defense department expenses did not, which really means more contracts were given out to the private sector, skewing the graph. One stunning discovery is that more Americans were employed in manufacturing in 1941, in the months leading up to the Japanese attack on Pearl Harbor, than were employed in manufacturing in the United States in 2015. The 4,821,000 people employed by government in August 1941 equalled 1 for every 27.7 people in the overall population of 133,402,471. This compares to the 21,995,000 employed by government in August 2015, which equalled 1 for every 14.6 people in the overall population of 321,191,461 (Jeffrey, 2015).
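The per-capita figures above are easy to verify. Below is a minimal, illustrative Python sketch; the only inputs are the population and employment counts quoted from Jeffrey (2015), and everything else is simple division.

```python
# Illustrative check of the per-capita figures quoted from Jeffrey (2015).
# The populations and employment counts are taken from the text above.

def people_per_gov_employee(population: int, gov_employees: int) -> float:
    """Return how many residents there are per government employee."""
    return population / gov_employees

ratio_1941 = people_per_gov_employee(133_402_471, 4_821_000)
ratio_2015 = people_per_gov_employee(321_191_461, 21_995_000)

print(round(ratio_1941, 1))  # ~27.7 -> one government employee per 27.7 people in 1941
print(round(ratio_2015, 1))  # ~14.6 -> one government employee per 14.6 people in 2015
```

The halving of the ratio over 74 years is exactly the point made in the text: the government workforce grew roughly twice as fast as the population.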

Figure 3.43 History of government employment and manufacturing employment (from Jeffrey, 2015). It is often stated that conservative values support the libertarian notion that government should be minimized (limited to national security, defence, etc.) and that liberal values support the notion that government should be expanded (in order to ensure equitable social insurance programs, welfare programs, or agricultural subsidies). However, US history does not support this notion. The size of the government rose continuously irrespective of which party held the presidency or Congress. In terms of government debt, however, an interesting picture emerges. Until Reagan, every administration endeavoured to reduce the national debt. After Reagan launched his so-called Reaganomics, however, US federal debt has increased overall irrespective of which party occupies the Oval Office. It did not matter whether there was a peace dividend (the Clinton era) or a war dividend (the post-9/11 Bush and Obama eras). Figure 3.44 shows that the growth of the government has been a bipartisan reality.

Figure 3.44 Neither government size nor government debt has been a partisan issue.

3.7 The Transition of Robotization

At the dawn of the Information Age, there was a fundamental change in the composition of commodity futures market participants. Traditionally, the market was dominated by specialized investors who would earn a risk premium by providing insurance to short-hedging commodity producers and long-hedging commodity processors (Keynes, 1930; Hicks, 1939). As we have seen elsewhere, the process here is to stimulate accumulation of wealth without labour (or, strictly speaking, without the valued production that drives the social aspect of economics). This process treats the Information Age as an opportunity to maximize profit, thus commodifying information (Zatzman and Islam, 2007). Not surprisingly, at the dawn of the Information Age, commodity investments began to grow at an unprecedented rate, reported to have increased from $15 billion in 2003 to $250 billion in 2009 (Irwin and Sanders, 2011). This was not because quality products were being produced at an unprecedented rate; rather, false value was created by manipulating perception. Officially, such vast inflows are mainly attributable to institutional investors that had historically never engaged in commodity investments on such a large scale (Domanski and Heath, 2007). Figure 3.45 shows the transitions in the trading of futures.

Figure 3.45 Transition from real value to perceived value of commodities (data from Commodity Futures Trading Commission website).

Figure 3.45 shows the drastic decrease in the market share of total agricultural production from 1970 to 2004. Precious metal mining has also shrunk in market share, whereas financial instruments, which started off in 1980, have skyrocketed to capture the vast majority of the market. This reflects the growth in the size and scope of finance and financial activity in our economy (which has doubled since the 1970s), the rise of debt-fuelled speculation over productive lending, and the ascendancy of shareholder value as the sole model of corporate governance. Conservative estimates show that from 2000 to 2010 the number of commodity index traders, i.e., “long-only” investors such as pension funds and insurance companies, more than quadrupled, and the number of hedge funds more than tripled (Adams and Glück, 2015). In contrast, during the same period, the number of traders engaged in futures markets to hedge commodity price risk less than doubled (Cheng et al., 2014). Such a trend is troubling for two reasons: (i) it converts perception into ‘real assets’; and (ii) it channels these ‘real assets’, mixed with actual, real assets, into a closed loop of financial movement that benefits only the banking system without any contribution to the real economic system. As Adams and Glück (2015) pointed out, only around 15% of the money flowing from financial institutions actually makes its way into business investment. The rest gets moved around a closed financial loop, via the buying and selling of existing assets like real estate, stocks, and bonds. These flows make no contribution to the betterment of society or the building of sustainable infrastructure, and they literally close the loop from money created out of nothing to money multiplying itself without any involvement of real assets. This process creates a cycle that increases inequality, since the top quarter of the population owns the vast majority of those assets. 
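The scale of the inflow reported by Irwin and Sanders (2011) can be put in annualized terms. The following Python sketch is only a back-of-envelope check; the sole inputs are the $15 billion (2003) and $250 billion (2009) figures quoted above.

```python
# Compound annual growth rate implied by the commodity-investment figures
# quoted in the text ($15B in 2003 to $250B in 2009).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1 / years) - 1

growth = cagr(15, 250, 2009 - 2003)
print(f"{growth:.1%}")  # roughly 60% per year, sustained for six years
```

No productive sector grows at anything like 60% per year for six years running, which is precisely the text's point: the inflow reflects perception, not production.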
This process is often characterized as ‘financialization’ – a phenomenon recognized during the development of financial capitalism after the Reagan era. Over the nearly 40 years following its onset in 1980, debt-to-equity ratios have increased and financial services have accounted for an increasing share of national income relative to other sectors. Few have recognized this process as an aberration of the core values of capitalism, and even fewer have suggested any meaningful remedy. In the end, the picture above emerges, leaving the rest of the science of economics to debate whether this equilibrium is a bubble or a perfect equilibrium. An excellent manifestation of this outcome came in 2013, when the economist Robert J. Shiller shared the Nobel Prize with Eugene Fama, even though the two subscribe to opposite views of how financial markets work (Applebaum, 2013). Figure 3.46 shows how reality is trashed while falsehood is promoted as the standard of truth, creating a paradigm shift in the wrong direction.

Figure 3.46 Falsehood is turned into truth and ensuing disinformation makes sure that truth does not come back in subsequent calculations. Figure 3.47 shows how this deceptive process works. First, the connection between real values and real knowledge is removed. For economics, this means removing the role of intention or conscience from economic analysis. Then, spurious values are assigned to a false perception of real knowledge. In this process, opacity is an asset, and any process that could potentially reveal the opaque nature of the science of economics is shunned. Instead, mathematics that glamourizes the process of opacity is celebrated. In essence, the bottom-left quadrant is brought to the top-right quadrant, and the greater the opacity, the greater the profit margin becomes. Once this ‘fact’ is established, it becomes a race to reach the most opaque solution and tout it as progress, often branding it as a breakthrough discovery. This process of scientific disinformation amounts to taking the left quadrant, with its spurious values, and turning it into the top-right quadrant, thereby promoting the implosive, disastrous model as the ‘knowledge model’ (see Figure 1.1 for a depiction of the knowledge model vs. the aphenomenal model). The most stunning outcome of this process of mechanization is the robotization of humans. Metaphorically depicted in Figure 3.48, it involves the false and illogical assertion of a continuous transition from ape to human, essentially disconnecting conscience, the fundamental trait of humanity, from humanity, and then setting the robot, or virtual reality, as the ultimate model for humanity.

Figure 3.47 A new paradigm is invoked after denominating spurious values as real and turning disinformation into ‘real knowledge’.

Figure 3.48 How falsehood is promoted as truth and vice-versa. Note that all Nobel Prize-winning economic models strictly follow this modus operandi. What has been identified elsewhere as the HSS®A® (Honey → Sugar → Saccharin® → Aspartame®) degradation, or syndrome, continues to unfold in attacks against both the increasing global striving toward true sustainability on the one hand, and the humanization of the environment in all aspects, societal and natural, on the other. Its silent partner is the aphenomenal model, which invents justifications for the unjustifiable and for “phenomena” that have been picked out of thin air (more precisely, ‘out of nowhere’). As with the aphenomenal model, repeated and continual detection and exposure of the operation of the HSS®A® principle is crucial for future progress in developing true sustainability. Because economics is the driver of today’s civilization, it is that much more important to identify this disinformation in economic models.

The above model is instrumental in turning any economic process, or chain of processes, that is accountable, manageable, and effective in matching real supply with real demand into a model that is entirely perception-based. To the extent that these economic processes also drive the relevant engineering applications and management that come in their train, such a path ultimately “closes the loop” of a generally unsustainable mode of technology development overall. In our recent work (Islam et al., 2003; Khan and Islam, 2016) we have identified such degradation in all aspects of our current civilization. Table 3.9 summarizes the HSS®A® pathway for certain disciplines, including economics.

Table 3.9 The HSS®A® pathway and its outcome in various disciplines.

Natural state | First stage of intervention | Second stage of intervention | Third stage of intervention
Honey | Sugar | Saccharin® | Aspartame®
Education | Doctrinal teaching | Formal education | Computer-based learning
Religion | New Science | Fundamentalism | Cult
Science and nature-based engineering | … | … | Computer-based technology design
Gold standard economy | Coins (non-gold or silver) | Paper money (disconnected from gold reserve) | Promissory note (electronic)

The HSS®A® degradation in economics involves several levels of control. The role of interest rates has already been discussed. Next, wars are used to stimulate the economy. This is the equivalent of forcing government spending, thereby giving the economy a short-term boost. It is part of the process that renders a natural process artificial. The most recent development in the HSS®A® degradation process has been toward Nothingness (N), transforming the degradation from HSSA to HSSAN. This is a journey from the intangible of the real (the Nature that created honey) to the false intangible (Nothing). The only way this trend can be reversed is by recognizing the pathway of falsehood. The natural process of economics involved a gold standard that was first perverted by the British. In 1257, Great Britain set the price of an ounce of gold at £0.89. While there was nothing wrong with setting a currency value tagged to gold – this was commensurate with the ‘promissory note’ that had been in use even by the medieval Islamic empires – fixing such a price to a non-redeemable currency is the core of the problem. Britain’s perversion of the original understanding of the gold standard did not stop at the original price fixing. The government raised the price by about £1 each century, as follows:

1351 – £1.34
1465 – £2.01
1546 – £3.02
1664 – £4.05
1717 – £4.25

Other countries followed suit. In the 1800s, most countries printed paper currencies that were backed by their value in gold, and had to keep enough gold reserves to support this value. Great Britain kept gold at £4.25 an ounce until the 1944 Bretton Woods Agreement, when most developed countries agreed to fix their currencies against the U.S. dollar, since the United States owned 75% of the world’s gold. The United States used the British gold standard until 1791, when it set the price of gold at $19.49. In 1834, it raised the price to $20.69. The Gold Standard Act of 1900 lowered it slightly to $20.67 and established gold, instead of silver, as the only metal backing paper currency. The most fundamental of US contributions to modern economics was the intervention of the Federal Reserve, which first raised the interest rate in August 1929. Recession and Depression followed. The price of gold went from $20.67 an ounce in 1929 to $35 an ounce in 1934. The Federal Reserve was trying to maintain the gold standard, and that helped cause the Great Depression. At the outset, there seems to be nothing unusual or toxic about the intervention of the U.S. Federal Reserve in the economic system, but once we review the theory of money and wealth, the picture becomes clear. Mises (1912) wrote: The excellence of the gold standard is to be seen in the fact that it renders the determination of the monetary unit’s purchasing power independent of the policies of governments and political parties. Furthermore, it prevents rulers from eluding the financial and budgetary prerogatives of the representative assemblies. Parliamentary control of finances works only if the government is not in a position to provide for unauthorized expenditures by increasing the circulating amount of fiat money. 
Viewed in this light, the gold standard appears as an indispensable implement of the body of constitutional guarantees that make the system of representative government function. (p. 416) What Mises feared as the outcome of manipulating the gold standard actually manifested itself through the U.S. Federal Reserve’s intervention. The fear was that governments would manipulate policies regarding fiat currency, thus making government non-representative of the general public. In the case of the USA, this intervention was caused by the Federal Reserve, which later doubled down by manipulating government policies to disconnect the gold standard from fiat currency altogether. This manipulation continued, with a heavy toll on the U.S. and world economy. In the 2009 edition of Mises’ book, Douglas E. French points out in the Foreword (Mises, 1912, 2009 edition):

“It was Ludwig von Mises, as Murray Rothbard wrote in Economics of Depressions: Their Cause and Cure, who ‘developed hints of his solutions to the vital problem of the business cycle in his monumental Theory of Money and Credit, published in 1912, and still, nearly 60 years later, the best book on the theory of money and banking’ (p. 2).” Because the standard of real value was removed from gold, even before gold was decoupled from the US dollar during the Nixon era, there were fluctuations in the inflation rate that were unrelated to market supply and demand. We have already seen the role of the interest rate. Figure 3.49 shows how inflation rates fluctuated over the last 100 years. The highest inflation rate was recorded in 1918. The inflation rate was negative in only 12 years, reaching its minimum in 1921 at –10.5%. Inflation was highest overall during World War I (1914–1918). It was low during the prosperous Twenties and negative during the Great Depression (1929–1939). Inflation was high during World War II (1939–1945) as well as immediately thereafter, reaching a local maximum of 14.4% in 1947. Similarly, the Korean War (1950–1953) was associated with another spike in inflation. During the US involvement in the Vietnam War (1955–1965), inflation rose again and became a national issue in the early Seventies. President Nixon attempted to freeze prices and wages, with limited success. Inflation shot up twice in response to the two oil crises (1973–74 and 1979–80), peaking at 11.0% in 1974 and 13.5% in 1980. Inflation moderated during the Eighties but rose during the Bush Administration, partly in response to the Persian Gulf War. Inflation declined through the Nineties in a much-celebrated redemption of Cold War dividends. In 1998, it was 1.6%, the lowest level in 35 years (since the early 1960s). The last time inflation was negative was 1955.
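As a side note on how annual rates like those quoted above accumulate, inflation compounds multiplicatively rather than additively, so a year of deflation does not simply cancel an equal-looking year of inflation. A hypothetical Python sketch (the sample rates below are illustrative round numbers, not the actual CPI series discussed in the text):

```python
# Illustration of how annual inflation rates compound into a price index.
# The three sample rates are hypothetical, chosen only to show that
# +14% followed by -10% does not return the index to its starting level.

def price_index(rates_percent, base=100.0):
    """Compound a base index through a sequence of annual inflation rates (in %)."""
    level = base
    for r in rates_percent:
        level *= 1 + r / 100
    return level

# Three hypothetical years: 14% inflation, then 10% deflation, then 2% inflation.
print(round(price_index([14.0, -10.0, 2.0]), 2))  # 104.65, not 106.0
```

This is why the wartime spikes described above left a permanent mark on the price level even after inflation subsided.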

Figure 3.49 Inflation rate and world events. The stock and commodity markets have become convenient tools for assigning perception-based values to real products and commodities. The original notion of a stock market dates back to 1602, when the Dutch East India Company became the world’s first publicly traded company upon releasing shares on the Amsterdam Stock Exchange. Stocks and bonds were issued to investors, and each investor was entitled to a fixed percentage of the East India Company’s profits. The creation of the New York Stock Exchange (NYSE) in 1817 revolutionized the business of making money out of “nothing”. It was the crash of this stock market in 1929 that plunged the United States into the longest, deepest economic crisis in its history. Even though it is too simplistic to view the stock market crash as the single cause of

the Great Depression, the very notion of the stock market was the beginning of the process. The effects of the Great Depression were felt across the world. It is no exaggeration to say that the ultimate price of the Great Depression was paid through the rise of extremism in Germany, leading to World War II. One of the most prominent outcomes of the Great Depression was the collapse of over 9,000 banks in the 1930s. Just as cancer cells do not pay the price for the onset of cancer (it is rather the patient who dies), bank failures simply resulted in the general public losing its savings. The fear of such losses triggers reductions in purchases, leading to a general shrinking of the economy. Coupled with this is the loss of jobs, which snowballs the problem. And whenever the government intervenes to salvage the financial institutions and compensate for such losses, it is once again the general public that foots the bill. One of the convenient tools for manipulating the economy is the ability to float gold in the stock market. Scientifically, gold is the standard (Table 3.9). As clearly seen in Figure 3.50, gold enjoyed a steady value for centuries. The United States used the British gold standard until 1791, when it set the price of gold at $19.49. The first spike in the gold price came only after the introduction of the New York Stock Exchange, established in 1817. This was the beginning of perception-based economic manipulation. In 1834, the United States raised the gold price to $20.69 (Amadeo, 2017). It was not until 1900 that the Gold Standard Act lowered the gold price to $20.67; at the same time, paper currency was legislated to be backed by gold. Showing the incompatibility of the changing gold price, fixed interest rates, and perception-based value determination through the stock exchange, recession began in August 1929, after the Federal Reserve raised interest rates in 1928. 
This was quickly followed by the 1929 stock market crash, which spread widespread fear of losing capital through the devaluation of paper currency. In order to manipulate the market, the Fed raised the interest rate, artificially lifting the value of the dollar while suppressing the real value of gold. This artificial maneuvering kept the dollar more valuable than gold until 1931. However, as discussed earlier, higher interest rates made loans too expensive, and under this artificial financial regime many companies went out of business. Then came artificial deflation of the dollar, on the belief that a stronger dollar could buy more with less. Companies cut costs to keep prices low and remain competitive. That further worsened unemployment, turning the recession into a depression – an inevitable outcome of the interest-driven spiraling-down mode. As more people resorted to turning paper money into gold, the gold price went up, leading to hoarding of gold and driving the price even higher (beyond the simple supply-and-demand principle). President Roosevelt intervened and outlawed private ownership of gold coins, bullion, and certificates in April 1933; Americans had to sell their gold to the Fed. In 1934, Congress passed the Gold Reserve Act, which prohibited private ownership of gold in the United States and allowed President Roosevelt to raise the price of gold to $35 an ounce. This lowered the dollar’s value, triggering inflation (Officer and Williamson, 2013). Disconnected from real worth (the gold standard), paper currency became a tool for manipulating the economy. In 1937, President Roosevelt cut government spending to reduce the deficit, which reignited the Depression. By that time, the government stockpile of gold had tripled to $12 billion.

It was held at the U.S. Bullion Depository at Fort Knox, Kentucky, and at the Federal Reserve Bank of New York (Liquat, 2009).

Figure 3.50 Gold prices throughout modern history in US $ (from Onlygold.com). Another dose of artificial boosting of the economy was injected in 1939, when President Roosevelt increased defense spending to prepare for World War II. The economy expanded, and at the same time the Dust Bowl drought that had previously contributed to the Great Depression ended. As a result, the Great Depression ended, but the economic infrastructure changed permanently, away from the gold standard to an artificial standard that could be manipulated by the government or the banking system. In 1944, the major powers negotiated the Bretton Woods Agreement, which made the U.S. dollar the official global currency. As the US artificially defended the price of gold at $35 an ounce, the entire economic system rested on a hollow foundation, which would become apparent in the Nixon era. In 1971, President Nixon told the Fed to stop honoring the dollar’s value in gold. That meant foreign central banks could no longer exchange their dollars for U.S. gold, essentially taking the dollar off the gold standard. What was triggered at this point is clear from Figure 3.50. Nixon first tried to devalue the dollar against gold, making it worth only 1/38 of an ounce of gold, then 1/42 of an ounce. In 1976, the United States officially abandoned the gold standard altogether. Unhinged from the dollar, gold prices skyrocketed from $42 to $120 an ounce. Gold – the standard that should be the basis for measuring the value of other goods and commodities – became the first one to fluctuate. This is equivalent to inserting ‘deliberate schizophrenia’, a term that has been associated with maliciously promoting an anti-conscience agenda (Islam et al., 2015). The rollercoaster ride continued. By 1980, traders had bid the price of gold up to $594.92 as a hedge against double-digit inflation. At this point, the Fed ended

inflation with double-digit interest rates, but caused a recession in the process. Gold dropped to $410 an ounce and remained in that general trading range until 1996, when it dropped to $288 an ounce in response to steady economic growth. This is the fulfillment of the scenario depicted in Figure 3.46 (upside down), which created an inverse correlation between gold and economic growth. Gold prices reveal the true state of U.S. economic health. When gold prices are high, that signals that the economy is not healthy, because investors buy gold as protection from either an economic crisis or inflation. Low gold prices mean the economy is healthy, because investors have many more profitable alternatives, such as stocks, bonds, or real estate. As such, gold prices have become detached from real value and only reflect the beliefs of commodities traders. If traders think the economy is doing poorly, they buy more gold; if they think it is doing well, they buy less. All of a sudden, perception is what reverses the worth of gold. Subsequently, every economic disaster has shot up the gold price. For instance, the economic crisis after the 9/11 terrorist attacks saw a sharp increase in the gold price. Similarly, gold rose to $869.75 an ounce during the 2008 financial crisis. In September 2009, gold was trading near its then all-time high of $1,032. At the time, the world was coming out of the 2008 financial crisis, and many thought economic growth would bounce back as it had after every other recession (Amadeo, 2017a). Instead, a high foreclosure rate in the United States and growing sovereign debt concerns revealed the true nature of the perception-based spurious economy. In February 2009, gold reached $1,000 an ounce for the first time ever. Many investors thought this meant gold was a good investment. 
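Taking the text's own figures – $288 an ounce in 1996 and the $1,895 record of September 2011 – the implied compound growth rate of gold over those fifteen years can be checked with a short, illustrative Python sketch (the prices and dates are quoted from the text; the rest is arithmetic):

```python
# Implied compound annual growth of the gold price between the 1996 low
# ($288/oz) and the September 2011 record ($1,895/oz), per the text.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1 / years) - 1

gold_growth = cagr(288, 1895, 2011 - 1996)
print(f"{gold_growth:.1%}")  # about 13.4% per year over 15 years
```

A sustained double-digit annual rise in the "standard" itself is the text's evidence that gold had ceased to function as a measuring stick and had become a traded perception.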
The economic system had already been primed to reject any return to a real-value-based system, and economic pundits promoted the notion that no more than 10% of one's investment should be in gold, thus channeling real assets into financial institutions that have no real asset to back up their financial status. As the recession ended in late 2009, the gold price continued to rise. The price of an ounce of gold hit an all-time record of $1,895 on September 5, 2011, in response to worries that the U.S. would default on its debt.

Gold is a unique 'commodity'. In a natural environment, gold is not a commodity but rather a standard against which assets are measured. Today, the amount of gold stockpiled is 60 times greater than the amount mined each year. This insulates gold from the supply-and-demand cycle. Thomas (2015) pointed out that 26 percent of the gold supply comes from recycled gold, which makes gold a non-consumable good. Typically, when prices rise, so does the amount recycled. However, 10 percent of the gold supply goes toward industry for various electrical/electronic devices. The rest is used for jewelry, coin collectors, and, most significantly, central banks. Figure 3.51 shows the distribution among various uses of gold for the year 2014.

Figure 3.51 Various uses of gold (redrawn from Thomas, 2015).

In attempting to explain gold price fluctuation, Amadeo (2018) listed recent fluctuations and correlated them with public perception. For instance, she points to the September 5, 2011 peak in gold prices ($1,895 per ounce), which correlated with depressed public perception in Europe due to a weak jobs report, the Eurozone debt crisis, and lingering uncertainty around the U.S. debt ceiling crisis. Short-term fear translated into a 100% jump within two and a half years. This peak was followed by a spiraling down of gold prices. The trigger was commodities guru George Soros announcing that gold was the "ultimate bubble" and no longer a safe investment. Soros himself reduced his holdings in real assets (such as gold, silver, and oil). Silver was the first victim of the perception created by Soros' announcement, falling 7.9 percent, its largest one-day drop in 30 years, to $39.38 an ounce. It had hit a 31-year high of $48.58 an ounce just one week earlier. Consider the total loss of 'wealth' caused by nothing more than Soros' spurious statement. The trend continued, and during 2013 the price of gold fell more than 28% (from $165.17/oz. to $118.36/oz.) despite the fact that other assets, such as real estate and equities, each rose at double-digit rates. What we showed as the spurious driver in Figure 3.46 is personified in this decade.

In keeping with its newfound role, the Fed suddenly adopted policies that espouse inflation as something desirable, in sharp contrast to all previous policies. Then Fed Chairman Bernanke said on December 18, 2013: "We are very committed to making sure that inflation does not stay too low …" Contrast that with the February 5, 1981 statement of then Fed Chairman Paul Volcker, who said that the Fed had made a "commitment to a monetary policy consistent with reducing inflation …" (see Website 3 for more information).
People who fail to recall these facts watch the gold price slide with some perplexity. The perplexity increases once one considers 'supply and demand' as the primary driver of the gold price. For instance, 2013, which saw such a tremendous drop in the gold price, also saw rising demand for gold, along with rising oil prices and a rising U.S. currency. Furthermore, the volatility that the Fed caused in emerging market economies by flooding the world with dollars at zero interest is another reason the price of gold should have been rising; instead, it fell. The perplexing behavior of the gold market can only be explained through the 'hidden hands' of the Fed and the financial institutions that caused the economic system to become entirely artificial. The Fed announced a zero-rate policy with a time horizon, which caused huge capital flows into emerging market economies by hedge funds looking for yield. The inflows

caused disruption to the immature financial systems in those emerging markets. Then, the moment the word "tapering" was concocted, all of the hedge funds headed for the exits at once. The Indian rupee, for example, which was trading around 53 rupees/dollar in May 2013, fell to 69 near the end of August, a 30% loss of value. Such behavior has stirred up new interest on the part of major international players (China, Russia) in having an alternative to the dollar as the world's reserve currency. However, this process is so convoluted, and so much is at stake, that such a move could itself create chaos in the artificial economy.

Leverage and hypothecation, the modus operandi of Wall Street, London, and financiers worldwide, have become covered under layers of opacity. One glaring example is the metal market, gold being just one of them. The paper gold market, the one that trades ETFs such as the SPDR Gold Trust, is 92 times bigger than the physical supply of gold according to Tocqueville's John Hathaway, who said in an interview with the Gold Report (2015):

An indication of this artificial market is the fact that the synthetic market for gold—the COMEX, options, over-the-counter and LBMA—is traded hundreds of times more than the actual metal. Investors actually short gold by posting margin on the COMEX. That eventually drives the price of gold down without any physical gold changing hands. It manipulates the psychological market environment and then the high-frequency and algorithmic traders push the price smashing to the extreme and it all happens with no gold actually being sold.

In layman's terms, this means that there are some 100 holders of a paper gold certificate for each ounce of gold; in other words, gold – a tangible asset – carries some 100 times its value in intangible form, which can only make sense if no one claims the tangible asset. This is the personification of the Disinformation model that saw the reversal of real assets into artificial assets and vice versa.
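The roughly 30% figure quoted above for the rupee follows directly from the exchange-rate arithmetic. A minimal sketch, using only the two rates given in the text:

```python
# Expressing the rupee's May-August 2013 slide as a percentage, using the
# convention implied by the text: the change in rupees-per-dollar relative
# to the starting rate.
rate_may = 53.0     # rupees per US dollar, May 2013
rate_august = 69.0  # rupees per US dollar, end of August 2013

depreciation = (rate_august - rate_may) / rate_may
print(f"Rupee depreciation: {depreciation:.1%}")  # roughly 30%
```

Note that the alternative convention, measuring the fall in each rupee's dollar value (1 − 53/69), gives about 23%; the text uses the former.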

3.7.1 The Cause of the Great Depression: Inequality and Deregulation

The 20th century is characterized, in part, by the number of lessons the modern world learnt from it. Relatively speaking, the 1900s contain a wide number of global socio-economic events that mark the age of a globalized and integrated world order—especially in Europe and North America. Amongst these events, the Second World War, its causes, and its effects are among the key historical lessons learnt from our recent past: its destructive capability, economic turmoil, and the sheer number of lives lost and ruined are all things we wish to avoid today. Yet, even after such history, the modern world still has problems of refugees, war, economic downturn, and even nuclear threats. Using the Great Depression and the subsequent rise of Fascism and the Second World War as a case study, Jaan Islam (2018a) argued that we must revisit our means of economic evaluation so as to prevent radical evil in the long term.

The market crash of 1929 is believed by economists to be the critical point leading to the Great Depression (Hall and Ferguson 1998; Morgan 2003, 12). The crash consisted of initial shocks, followed by a bear market in which US share prices would decrease by 89% over the next three years or so (Atherton, 2009). Nevertheless, although both the theory of decreased supply and monetary-constraint explanations have been used to describe the crash in the first place, the

events leading up to the depression have undeniable features that cannot be explained by Maynard Keynes' (2016 [1936], ch. 1) explanation alone, in which a decline in autonomous expenditure causes a lack of investment and hence the onset of the depression. A major perspective with statistical correlation and logical reasoning is ignored (or not fully incorporated): inequality and deregulation causing major bank failures, and the inability to correct them with feasible government policy. If these two are proven to have a causal link with the events leading to the depression, this theory can be in part confirmed.

Firstly, it is true that easy credit, slowed industrial production, and other warning signs prompted a further weakening in the American economy. Deregulatory behaviour prior to the market crash led to widening income inequality during the rapid growth of the 1920s (Engerman and Gallman 2000, 303; Holcombe 1996, 196; Zatzman and Islam 2007, 26). Both this behaviour and the income inequality are fact; what about their correlation to the bank crisis? Economist David Moss (2010) demonstrated a high level of correlation between bank runs and income inequality over the last century. It can therefore be stated that deregulatory policies and increased income inequality compounded both the fragility of lending (debt-deflation) and the financial/loan burden on the increasingly poor population. Moss went so far as to say that "the patterns across American history are sufficiently striking that further investigation of possible connections seems merited." For the purpose of viewing a general timeline, Figure 3.52 charts the decline in the US stock market, marking the timeline of various economic decisions prior to and after the market crash.

Figure 3.52 The Dow Jones Industrial Average from 1928/9 to 1934, with annotations describing brief historical-economic events (modified from: Velauthapill 2009).

The major competing theory of the Great Depression, the 'monetary explanation' (favored by modern economists, famously Thomas E. Hall and David Ferguson 1998; Friedman and Schwartz 1963), proposes that it was too much regulation following the crash that deepened the depression: US Government adherence to the gold standard,1 and the Government's increase in

tax (1932) to deal with rising inequality, are believed to have further worsened the US'—and hence Europe's—economic power, deepening the depression. This produced both a 'great contraction' in the economy and the untested hypothesis that providing liquidity to failing banks would stop the crises. There is no denying this theory's potential role in explaining the acceleration or exaggeration of the depression's effects, but it is neither mutually exclusive with, nor entirely acceptable as, an explanation of the underlying cause of the depression. In fact, Friedman and Schwartz admitted their theory did not explain the cause of the depression.2 Hence we can discard the theory—by virtue of both economists' dismissiveness of their own opinion—that the depression was caused by the Federal Reserve. Hence, when decades later then Fed chair Ben Bernanke stated, "yes, we did it. We're very sorry", it was based on the inaccurate view that the Government caused the depression (Federal Reserve Board, 2002).

Figure 3.52 shows the original crash in 1929, the series of banking crises and the 1932 gold drain, followed by the halting of the crisis and the recovery from the depression upon Franklin D. Roosevelt's election and implementation of economic policies. Although monetary policy may have played a part in accelerating the depression, it is inaccurate to condemn regulation or government intervention as its cause, for the very fact—largely undisputed—that the inequality that placed a burden on the working class of the United States created the economic conditions for the fall. Furthermore, it was with government intervention, spending, and taxation that the depression was finally put to an end (Zatzman and Islam, 26–27). In summary, politically speaking, it was President Coolidge's growth policies that were the general indirect cause of the depression.
Galbraith (2009, 26) summarizes a hypothesized rationale for Coolidge's policies, stating that "the regulation of economic activity is without doubt the most inelegant and unrewarding of public endeavours". Now that this is established, the question remains: 'in which ways did the depression contribute to the rise of fascism?'

In Germany, the depression brought massive unemployment (of up to 6 million individuals), famine and food insecurity for the poor population, and economic standstill (Aldcroft and Morewood 2013, 77–78). This relatively extreme change of fortune for the German people created a perfect incentive to choose radical alternatives out of loss of hope in mainstream politics. Specifically, as Layton (2015, ch. 12) notes, the discontent and desire for alternatives was "easily exploited by the Nazi's type of political activity". Amongst many techniques, the use of scapegoats (often Jews portrayed as evil "moneylenders") to gain popularity and 'charismatic' leadership qualities allowed the Nazi party to excel in gaining the support of the masses (US Army 1983, 32). Right-wing populism in favour of the NSDAP would accelerate thereon. Numerous commentators cited above state that it was specifically the difficulties arising from the well-known effects of the Great Depression (which we do not have space to discuss in detail) that drove this shift. Others state that although the Great Depression provided an opportunity for the NSDAP to become mainstream, the party was not—by virtue of the depression alone—turned into a 'mass movement' (Kolb 2004, 100),3 as other extreme parties in Germany did not achieve the same success. This fact confirms the insufficiency, but also the necessity, of the Great Depression in

influencing the opinions of the German people. Speaking of a situation with all other factors held constant, it is likely that the Second World War followed this sequence of causality: first, US Government policies that led to the market downturn and Great Depression in 1929; second, the globalized negative effect of the depression, which caused physical misfortune and mental despair; third, the NSDAP seizing the opportunity, launching a propaganda campaign, and winning the German federal election in 1932; and finally, Hitler's seizure of power and preparations for the invasion of Europe (1939). Without making generalizations, this is a widely accepted sequence of events (in terms of both timeline and causality). It is important to highlight the causality relationship; specifically, the way in which highly ideologically-motivated movements (like the Nazis') found their origins in material concerns.

There are many important observations and points to make about the Great Depression, its paving the road for WWII, and the nature of contemporary economics itself; points which raise concerns and debate about the nature of economic policy today. Firstly, there is the observation of a "butterfly" effect: people's decisions and actions have the potential to greatly affect religious and political conditions through economic policies. In fact, it is specifically economic decisions that carry an enormous amount of weight: money serves as the fuel and basis for people's livelihood in the modern era, and it controls energy supplies, war, and ideological and political opinion. The cause-and-effect relationship of the events in Germany in the 1930s serves as a perfect model for this point. Secondly, one must note how US economic policy and market behavior have affected the economic and physical wellbeing of millions of individuals: they gave rise to mainstream extremist, racist, religio-political ideologies (i.e.
Fascism in Germany), and affected the condition of peace in Europe at the time. It is necessary to put extra emphasis on European/American economic government policies by considering the fact that they may have lasting effects on the world. Though predicting the exact effects of decisions made in the moment is neither possible nor practical, by considering mistakes, patterns, and observations made in history, one is more likely to be able to deal with familiar problems in the future. Practically speaking, economic policy should be handled with extreme caution and education. As an example, it could have been possible to mitigate or even pre-empt the 2008 recession if policy-makers had considered the effects of their policies in light of previous lessons—such as the Great Depression, in which easy credit was one of the indicators of the market collapse.4 If indeed the debt-deflation theory of Irving Fisher (1933)5 and economic inequality theories have some truth, this would be the case for the recession.

There is also another perspective, in which "history" is viewed as a guide to the future on a larger time-scale. It is impractical, as proven time and time again, to assume even a reasonable amount of foresight surrounding economic decisions. The true sustainability of a given system relies upon reasonable predictability but, most importantly, balance. Without these two factors, it is entirely possible that ad-hoc systems based on loosely accepted values can turn unpredictably from incredibly 'fortunate' to the exact opposite. Mainstream economists since the Great Depression have lived with the perception that "there is no alternative"—that the only plausible option for economics is to balance between the highly

unpredictable and inherently unsustainable paths of 'high inequality with lots of growth' or 'low inequality with no growth'—with additional major shocks here and there. Yet, this belief in the non-existence of an alternative is potentially the only obstruction to finding it (Zatzman and Islam 2007).

3.8 Yellow Gold vs. Black Gold

Oil and gold are both natural resources. Islam et al. (2010) as well as Zatzman (2012) have discussed how the oil price is not dictated by supply and demand. In earlier sections, we discussed how the gold price is decoupled from supply and demand constraints. However, unlike gold, oil is consumable and thus a commodity that drives modern civilization. From a natural economics standpoint, gold should not change its value, as it is the standard. The Shekel, a coin originally weighing 11.3 grams of gold, became a standard unit of measure in the Middle East in 1500 BC and took its place as the recognized standard medium of exchange for international trade (Ferri, 2013). Were natural pricing (i.e., no interest, inflation, or any other artificial manipulation of the economy) allowed to prevail, the amount of goods that one gram of gold bought thousands of years ago would still be about the same today, the only exception being price changes due to supply and demand constraints. It is therefore of interest to consider gold prices over historical periods during which specific manipulation mechanisms were instituted. Figure 3.53 shows the inflation-adjusted price of gold during the last few centuries. Note that manipulation of the economy had been going on since the acceptance of currency fixation based on the gold standard. However, the "roller coaster ride" relates to the time the dollar was accepted as a world standard, and the even more intense fluctuation relates to the official decoupling of the US dollar from the gold standard, which started in the early 1970s.

Figure 3.53 The long-term inflation-adjusted price of gold, in 2011 dollars (data from MeasuringWorth.com, as reported by Ferri, 2013). It highlights the average $500-per-ounce price over the 220-year period, which was crossed many times.

Figure 3.54 shows the inflation-adjusted oil price in US dollars. Although dominated by a different set of constraints, oil prices show intense fluctuations, similar to gold, after the 1970s decoupling of gold and the US dollar. Also, the record inflation-adjusted gold price of the 1980s coincided with a sharp rise in the price of oil.

Figure 3.54 Inflation adjusted oil price for the last 70 years.

Figure 3.55 shows the ratio of the gold price to the oil price. Since the 1970s, the average ratio has been around 15, but the sharpest departure from this average has come in recent years, in the post-2008 financial crisis era. In the pre-Nixon era, the ratio averaged around 20, and during that phase there was no practical correlation between recession and sharp changes in the gold/oil ratio. At the outset, it would appear that the rise in this ratio is due to the dramatic slide in crude rather than the strength of the gold price. However, in light of our discussion in previous sections, it becomes clear that this erratic behaviour arises from the detachment of pricing from the real value of any product. We have indeed arrived at a time when everything is erratic, just as expected for an artificial system that is inherently implosive. As summed up by Ferri (2013) in his article on gold,
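The gold/oil ratio plotted in Figure 3.55 is a simple per-year division: dollars per ounce of gold divided by dollars per barrel of oil, i.e., how many barrels one ounce of gold buys. A minimal sketch of the computation; the prices below are illustrative placeholders, not the figure's actual dataset:

```python
# Gold/oil ratio = barrels of oil that one ounce of gold buys in a given year.
# The prices here are hypothetical placeholders for illustration only.
gold_usd_per_oz = {1970: 37.0, 1980: 595.0, 1996: 288.0, 2011: 1895.0}
oil_usd_per_bbl = {1970: 3.4, 1980: 37.4, 1996: 20.5, 2011: 98.8}

gold_oil_ratio = {
    year: gold_usd_per_oz[year] / oil_usd_per_bbl[year]
    for year in gold_usd_per_oz
}
for year in sorted(gold_oil_ratio):
    print(f"{year}: gold/oil ratio = {gold_oil_ratio[year]:.1f}")
```

With real price series in place of the placeholders, averaging this ratio over the pre-1971 and post-1971 years reproduces the roughly 20 versus roughly 15 averages discussed above.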

Figure 3.55 Ratio of gold price (per ounce) to oil price (per barrel). Grey areas mark recession periods.

"I like investments that pay dividends or interest, and those that grow in value over the inflation rate. Gold doesn't fit into either of these categories. A bar of gold produces no income and never grows into two bars of gold. In fact, it costs money to own gold due to trading cost, storage costs, insurance and possibly management fees."

In January 1980, gold hit a record high of $850 per ounce. The high inflation of that period was driven by strong oil prices, the Soviet intervention in Afghanistan, and the impact of the Iranian revolution, which prompted investors to move into the metal (see Table 3.1 for details of inflation over the years).

On September 10, 1864, Harper's Weekly featured a cartoon about the stock and gold markets during the Civil War. The following excerpt from the New York Times (Website 4) describes how the financial institutions peddled influence and got control of the government by creating a currency backed by a promissory note that was simply fraudulent. It also created the insanity of seeing the upside-down model of economic development as the driver of 'economic progress'.

Table 3.1 US annual inflation rates, 1956–2017.

Year  Rate     Year  Rate
2017  2.14%    1986  1.91%
2016  1.26%    1985  3.55%
2015  0.12%    1984  4.30%
2014  1.62%    1983  3.22%
2013  1.47%    1982  6.16%
2012  2.07%    1981  10.35%
2011  3.16%    1980  13.58%
2010  1.64%    1979  11.22%
2009  -0.34%   1978  7.62%
2008  3.85%    1977  6.50%
2007  2.85%    1976  5.76%
2006  3.24%    1975  9.19%
2005  3.39%    1974  11.03%
2004  2.68%    1973  6.16%
2003  2.27%    1972  3.27%
2002  1.59%    1971  4.30%
2001  2.83%    1970  5.84%
2000  3.38%    1969  5.46%
1999  2.19%    1968  4.27%
1998  1.55%    1967  2.77%
1997  2.34%    1966  3.01%
1996  2.93%    1965  1.58%
1995  2.81%    1964  1.28%
1994  2.61%    1963  1.24%
1993  2.95%    1962  1.20%
1992  3.03%    1961  1.07%
1991  4.25%    1960  1.46%
1990  5.39%    1959  1.01%
1989  4.83%    1958  2.74%
1988  4.08%    1957  3.34%
1987  3.66%    1956  1.52%
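Annual rates like those in Table 3.1 compound multiplicatively, which is how a nominal historical price is restated in later-year dollars. A minimal sketch using a handful of rates from the table; applying them as whole-year adjustments to the January 1980 gold peak is a simplifying assumption:

```python
# Build a price-level factor by compounding annual CPI changes from
# Table 3.1, then restate a nominal 1980 price in end-1983 dollars.
annual_inflation = {  # year -> annual CPI change, from Table 3.1
    1980: 0.1358,
    1981: 0.1035,
    1982: 0.0616,
    1983: 0.0322,
}

def price_level_factor(rates, start, end):
    """Product of (1 + rate) over the years start..end inclusive."""
    factor = 1.0
    for year in range(start, end + 1):
        factor *= 1.0 + rates[year]
    return factor

factor = price_level_factor(annual_inflation, 1980, 1983)
peak_1980 = 850.0  # the January 1980 gold peak, $/oz
print(f"${peak_1980:.0f} in 1980 is about ${peak_1980 * factor:.0f} in end-1983 dollars")
```

Extending the dictionary to the full 1956–2017 span of the table gives the price-level series behind inflation-adjusted charts such as Figures 3.53 and 3.54.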

To finance the extraordinarily expensive Union war effort, Treasury Secretary Salmon P. Chase and Congress borrowed millions of dollars from a consortium of private American banks, reorganized the banking system, imposed a new set of internal taxes, printed paper currency ("greenbacks") not backed by gold, and issued government bonds. These policies increased the self-interest of the financial community in the success of the Union cause, enhanced its influence in government affairs, and inaugurated a wartime economic boom. Those with money to invest looked to Wall Street, where margin buying (sometimes as low as 3% of a stock's cash value) helped fuel a roaring bull market. In January 1862, the New York Tribune reported on the great excitement among investors: "The intense desire to buy almost any kind of securities amounted almost to insanity…. The oldest members of the Board cannot remember such a day of rampant speculation." By the next year, stockbrokers were earning unprecedented commissions of $3,000 weekly.

3.9 How Science and Economics Mimic the Same Aphenomenality

Paralleling Linus Pauling's idea that 'all chemicals are chemicals', the Nobel Laureate in Economics Joseph Stiglitz has redefined the entire field and science of economics along the line of the notion that information is destiny. Such dogmas have proven especially harmful for health and quality of life in the developed world, and for basic economic welfare in the developing countries of Africa, Asia, and Latin America (Godoy et al., 2000). Scientists need to ask whether the assumption that 'chemicals are chemicals' is true. Every nation that fell, or was pushed, into the trap of chemical fertilizer use by agribusiness is now searching for ways to escape its myriad problems. If money, or investment, is "destiny", why do we see repeated economic collapse in developing countries proportional to the money invested from developed donor countries? Figure 3.56 illustrates this. Following a term of service as the chief economist of the World Bank, it was Prof. Stiglitz who, in an August 2003 speech in Bangladesh, stated that "the World Bank and IMF only serve the interest of developed countries". Institutions in these countries overhauled their basic posture during the Kennedy Administration. Their policies came to be guided by the theories of "economic takeoff" (Rostow, 1960), and were reoriented and realigned in the closest possible collaboration with the United States Agency for International Development (AID) programs. From that point on, such an outcome was never in doubt. According to the U.S. motivational guru Brian Tracy, "today the greatest single source of wealth is between your ears". Human beings, by their labour, are the source of all wealth, yet modern civilization equates wealth with reducing the human population. The exceptions are the U.S. and Canada, where population increases are now attributable entirely to immigration while the effective birth rate declines.

Figure 3.56 Net development (true GNP per capita, after subtracting foreign debt payments & re-exported profits of TNC’s, etc.) and net dependency for various countries.

3.9.1 Sources of Unsustainability

In this section, we describe the root causes of the crises encountered in the Information Age. By addressing the root causes, change can be invoked for the long term, avoiding cosmetic changes that offer only band-aid solutions. Scientifically, this is equivalent to re-examining the first premise of all theories and laws. If the first premise does not conform to natural laws, then the model is considered unreal (not just unrealistic) – dubbed an "aphenomenal model." With these aphenomenal models, all subsequent decisions lead to outcomes that conflict with the stated "intended" outcomes. At present, such conflicts are explained either with doctrinal philosophies or with a declaration of paradox. Our analysis shows that doctrinal philosophy, which amounts to aphenomenal science, is the main reason for the current crisis that we are experiencing. The statement of a paradox helps us procrastinate in solving the problem, but it does nothing to solve the problem itself. Both of these states keep us squarely in what we call the Einstein box. (Albert Einstein famously said, "The thinking that got you into the problem is not going to get you out.") Instead, if the first premise of any theory or law were replaced with a phenomenal premise, the subsequent cognition would encounter no contradictions with the intended outcomes. The end results show how the current crises can not only be arrested but also reversed. As a result, the entire process would revert from being unsustainable to sustainable. The proposed model closes the vicious loop of the sequence: unsustainable engineering → technological disaster → environmental calamity → financial collapse. This sequence has become exposed since the dawn of the Information Age.

Notes 1 For more ideological and not necessarily pragmatic reasons

2 This would prove to be quite convenient, as otherwise Friedman and Schwartz would be forced to come to grips with the reality that inequality and growth-oriented economic models were in fact the cause.

3 This is considering the fact that there was more than a single right-wing populist party at the time: Ibid; Abelshauser et al., German Industry and Global Enterprise: BASF: The History of a Company (Cambridge: Cambridge University Press, 2003), 246.

4 Of course, this was at least in part due to lobbying by large corporations and banks with predatory credit policies. Hypothetically, policy opposing increased regulation could have been more rationally considered if politicians had been acting on an informed, unbiased knowledge of the historically-proven dangers of not regulating credit lending.

5 This is not to claim Fisher's formulation of debt-deflation theory is necessarily accurate. I simply identify the fact that a number of post-Keynesian and other economists have discussed the matter in necessary detail, and that these theories could have been used to pre-empt the housing bubble collapse.

1 The Twenty-fifth Amendment (Amendment XXV) to the United States Constitution deals with succession to the Presidency and establishes procedures both for filling a vacancy in the office of the Vice President and for responding to Presidential disabilities. It supersedes the ambiguous wording of Article II, Section 1, Clause 6 of the Constitution, which does not expressly state whether the Vice President becomes the President or Acting President if the President dies, resigns, is removed from office, or is otherwise unable to discharge the powers of the presidency. The Twenty-fifth Amendment was adopted on February 10, 1967.

2 Zero-waste is when every chemical used as well as every energy source used is natural, thereby leaving behind no product or by-product that is inherently foreign to nature. 
Zero-waste is the natural equivalent of the Aristotelian economics model, metaphorically representing the 'gold standard' in economics or the 'honey standard' in material engineering. See Khan and Islam (2016) for details.

3 North American Islamic scholar Hamza Yusuf reportedly said, "If Muslims from the 8th century were dropped in Norway, they would think it was the Caliphate of Umar ibn Abdul Aziz".

4 Every medicine is about 'managing' a disease or temporarily muting its external expressions, such as symptoms.

5 The Gulf of Tonkin incident, also known as the USS Maddox incident, was an international confrontation that led to the United States engaging more directly in the Vietnam War. It involved either one or two separate confrontations involving North Vietnam and the United States in the waters of the Gulf of Tonkin. The original American report blamed North Vietnam for both incidents, but it eventually became very controversial, with widespread claims that either one or both incidents were false, and possibly deliberately so.

6 The subprime mortgage crisis itself was an irreversible economic repercussion in which three major financial institutions failed to repay loans, prompting the US Government to inject US$236 billion into the American banking system (Mathiason, 2008).

7 Such false modelization is also true of democracy – the driver of the western political system. As Sir Winston Churchill famously said, "Democracy is the worst form of government, except for all the others." Yet there is a great deal of romanticism surrounding democracy and democratic values, and many people remain optimistic even though they are not quite sure of the source of that optimism. Others resort to optimism just because pessimism is too depressing an idea, thus oscillating between dangerously risky optimism and depressive pessimism (Hecht, 2013).

8 Mark Twain clarified, "If you pick up a starving dog and make him prosperous, he will not bite you. This is the principal difference between a dog and a man".

9 Whether or not the government Hobbes envisioned was meant to be tyrannical and/or totalitarian is debated. Nevertheless, the fact remains that power is vested (favourably) in one individual.

10 Hobbes is seen by many as the "representative example of a 'realist' in international relations" (Lloyd and Sreedhar, 2014, Section 5). By realist, I mean the view that states are in a state of nature, are unitary actors, and are interested in preserving their self-interest.

11 This notion of fluctuations comes from Ibn Khaldun. While the notion itself is well known in the academic community, the fact that Ibn Khaldun linked this fluctuation to non-Shariah modes of governance is rarely acknowledged. The Shariah mode was unique to prophet Muhammad and his rightly guided Caliphs, who were at the helm of the Islamic empire for some 40 years during the early 7th century.

12 Named after London School of Economics Professor A.W. 
Phillips (1914–1975), who first presented the inverse relationship between inflation and the unemployment rate as stable, effectively suggesting that with economic growth comes inflation, which in turn leads to more jobs and less unemployment. 13 Due to the importance of this event, a section on the Great Depression has been added.

Chapter 4 State-of-the-Art of Current Technology Development

4.1 Introduction

The purpose of this chapter is to discuss the HSSA degradation phenomenon from a philosophical perspective. While the previous chapter looked at the downward spiral currently underway in western economics, this chapter examines the underlying philosophical theories and explanations behind this economic degeneration over the centuries. Early civilizations considered themselves the guardians and caretakers of all living things on the lands they inhabited, and held themselves responsible for future generations. Indigenous nations of the Americas considered themselves one with all around them. There was no special word for Nature, no separation: plants, animals, and humans were considered interdependent. Into this world came the European invader, funded by rulers at home, whose arrival led to the eventual corporatizing of the earth, which all living things share in common, into a commodity to be broken up at will through wars and land appropriation. In the contemporary world, capital-centredness is the ultimate source of the problems unleashed by an economic order based on oligopolies, monopolies and cartels. Possibly the most important problem of today's society is that everything is denatured and the artificial version is constructed and promoted as the ideal version. In chemical engineering, an entire subject is dedicated to denaturing materials (Khan and Islam, 2016). In fact, this has been a recurring theme in the modern education system (Islam et al., 2013). Teachers learn to indoctrinate (anti-Education), lawyers learn to argue to turn falsehood into truth and vice versa (creating opacity instead of transparency), doctors learn to give medicines instead of addressing the cause of a malady – the list is endless.
As we have seen in previous chapters, a Nature-science standpoint begins with the premise that nature is the perfect model and that if a process is not natural it cannot be sustainable. The problem is that the only economic model available to current society is the one that benefits from denaturing. In fact, there is a direct correlation between profit and processing. The entire engineering discipline is based on this model, which essentially rejects any notion of finding new solutions beyond the conventional set, settling for technology transfer and turn-key projects that are not challenged beyond their marketability. In this process, the first and most important phase of denaturing is the denaturing of human qualities. It is done by corporatizing the education system, which sees human beings as a commodity and as consumers. The education system robotizes humans, who become an integral component of the master/slave yin yang. At the end, what we have is a society with extreme poverty and extreme wealth, and both the poor and the rich are rapidly spiralling down the path of economic extremism. This process cannot be reversed without making a paradigm shift.

A popular anecdote drives home this point of socio-economic extremism. Consider the following:

1. A healthy baby is born in a hospital after epidural treatment1.
2. A few days later, the baby is unresponsive and very lethargic.
3. The baby is rushed to the emergency room; everything is considered other than the effect of the epidural. Many blood tests and other procedures are performed to check for infection. The baby goes home, 'untreated'.
4. The mother takes the baby to a doctor.
5. The doctor gives the baby vaccines.
6. The child develops serious symptoms and is prescribed Tylenol.
7. A few days later, the child is back at the doctor with symptoms of infection, and is prescribed an antibiotic.
8. The child continues to develop symptoms, more antibiotics follow, and Tylenol becomes a routine 'home remedy'.
9. The child develops symptoms of asthma and ADHD.
10. The doctor prescribes steroids and Ritalin.
11. Mom thanks the doctor for helping her child.

In this particular anecdote, each activity is an economic activity with a profound impact on the overall health of the child. Yet, during the same process, corporate profit skyrockets, all at the expense of denaturing the child's health (Figure 4.1). As Figure 4.1 depicts, this degeneration of the natural order is a mirror image of natural progression. Both reach an equilibrium, albeit the one at the bottom being the fictitious one. In modern economics, this fictitious equilibrium is anticipated, and all measures are taken to introduce another process of denaturing that is more toxic than the previous one. That brings in the concept of Honey → Sugar → Saccharin® → Aspartame®, denoted as the HSS®A® degradation by our research group (starting with Zatzman and Islam, 2007).
The HSS®A® pathway is a metaphor representing many other phenomena and chains of phenomena that originate in some natural form but are subsequently engineered through many intermediate stages into "new products", ultimately transforming into nothing, for which the profit margin approaches infinity. These "new products" include materials, technology, and thought processes. This chapter identifies the HSS®A® pathway in theories of physics as discussed by all major scientists and philosophers. Since 2007, the authors have striven to popularize this Honey → Sugar → Saccharin® → Aspartame® pattern as a kind of shorthand reference to a particularly critical insight, often overlooked or only tangentially acknowledged, into the essence of the transformation of the natural into the artificial. This is a pattern so characteristic and widespread across every department of modern industrialized commodity production as to have become invisible. HSS®A® represents an entire class of other processes of degradation of a gift of Nature by its commodification as a byproduct of industrial-scale organic chemistry.

Figure 4.1 Economic activities have become synonymous with corporate profiteering and the denaturing of society.

In follow-up papers, the works of Newton, Maxwell, Einstein, Feynman, and Hawking are reviewed and their fundamental premises deconstructed. Once identified, it becomes clear how disinformation works in the current system in the context of the laws and theories of physics. One can then figure out a way to reverse the process by avoiding aphenomenal schemes that lead to ignorance packaged with arrogance. So, to sum up, the collapse of logical thinking in economics that we discussed in Chapters 2 and 3 is mirrored in technology development, which is based on the same dogmatic thought process. This mode of economics drives waste-based technology, which is inherently anti-nature. By taking the short-term approach of maximizing quarterly profit, mechanisms have been created that make the world environment continuously worse. Figure 4.2 elaborates this aspect for technology development. It shows how the cost to customers, which is indicative of corporate profit, goes up exponentially as the quality of food goes down. This figure may be readily extrapolated to other aspects of social development, including politics and education.

Figure 4.2 The outcome of the short-term, profit-driven economics model.

4.2 Denaturing for a Profit

Almost two decades ago, our research group uncovered this progression from honey to sugar to saccharin to Aspartame®, labeling it the HSSA Syndrome. Numerous titles among its publications elaborate on the repeated appearance of the degenerative results of this cycle as a metaphor for many other phenomena in which engineered interventions degrade the quality and effectiveness of existing naturally-sourced chemicals and processes. The discovery and elaboration of the HSSA pattern has rich implications for the very notion of sustainability. Our research has yet to find a single corporate-controlled chemically-engineered process today that did not invent, and/or indefinitely extend, a future for itself, either by adding anti-nature elements to a natural source or by completely replacing the natural source with a chemically-engineered substitute. Indeed, this describes the essence of the "plastics revolution" of our own time. Today, containers of literally every description, as well as function-critical components of all kinds of engines and other machines, have been or are being replaced with chemically-engineered substitutes. These generally replace sources that were non-toxic or of relatively neutral impact when disposed of as waste in the environment. All that is really being sustained, then, is an artificially low cost-price to the consumer and maximum profit for the corporate sector. The problem here is neither one of growth nor of development as things in themselves, but of how these are actually carried out, that is to say: their pathways. As processing is done, the quality of the product decreases (along the HSSA syndrome). Yet this process is called value addition in the economic sense. The price, which should be proportional to the value, in fact rises in inverse proportion to the real value (opposite to the perceived value, as promoted through advertisements).
Here, the value is fabricated, similar to what is done in the aphenomenal models used in economics (Chapter 3). The fabricated value is made synonymous with real value or quality (as proclaimed by advertisements), without any further discussion of what constitutes quality. This perverts the entire value-addition concept and falsifies the true economics of a commodity (Zatzman and Islam, 2007). Logically, before certifying a process or system as 'sustainable', the conformity of the process with nature must be demonstrated. Instead, exactly the opposite criterion is used; that is to say, unless a mess has been made out of the natural order, a patent is not granted. Consider Table 4.1. This table lists the inherent nature of natural and artificial products. It is important to note that the left-hand-side statements are true – not in the tangible sense of being "verifiable", but because there is no counter-example to those statements. The left-hand side of Table 4.1 lists characteristic features of Nature. These are true features, not based or dependent on perception. Each is true insofar as no example of the opposite has been sustained. It is important to note that the table describes everything in existence as part of the universal order and applies to everything internal, including time and human thought material (HTM). However, the source of HTM, i.e., intention, is not part of these features. At the same time, all the properties stated on the right-hand side, which assert the first premise of all "engineered products", are aphenomenal: they are only true for a time period approaching zero, and are "verifiable" only when the standard itself is fabricated. In other words, every statement on the right-hand side refers to something that does not exist. For instance, honey molecules are considered to be extremely complex. They are complex because they have components that are not present in other products, such as sugar, which is identified as made up of "simple" molecules. Why are sugar molecules simple? Because, by definition, they are made of the known structures of carbon and hydrogen. A further review of Table 4.1 will indicate how every item on the right-hand side is actually a matter of definition and a false premise. Sugar is not simple.
Its composition is not static (why else would there be an expiration date on the product?). It is not predictable (who would have predicted 50 years ago that sugar would become the second most important cause of mortality in the USA?). It is not unique. It is not symmetric, and the list continues. The only reason sugar is promoted as following the right-hand side of Table 4.1 is that it can then be mass-produced and effectively used to replace whatever lies on the left-hand side, so that the profit margin is increased. Whoever became a billionaire selling honey? If one compares the features of artificial products in Table 4.1 with those of Table 4.2, it becomes clear that any science that would "prove" the features (based on a false premise) in Table 4.1 is inherently spurious. However, the science of tangibles does exactly that and discards all natural processes as "pseudoscience", "conspiracy theory", etc. This also shows that current engineering practices that rely on false premises are inherently unsustainable.

Table 4.1 Typical features of natural processes, as compared to the claims of artificial processes (From Khan and Islam, 2016).

No. | Feature of natural | Feature of artificial
1 | Complex | Simple
2 | Chaotic | Ordered
3 | Unpredictable | Predictable
4 | Unique (every component is different), i.e., forms may appear similar or even "self-similar", but their contents alter with passage of time | Normal
5 | Productive | Reproductive
6 | Non-symmetric, i.e., forms may appear similar or even "self-similar", but their contents alter with passage of time | Symmetric
7 | Non-uniform, i.e., forms may appear similar or even "self-similar", but their contents alter with passage of time | Uniform
8 | Heterogeneous, diverse, i.e., forms may appear similar or even "self-similar", but their contents alter with passage of time | Homogeneous
9 | Internal | External
10 | Anisotropic | Isotropic
11 | Bottom-up | Top-down
12 | Multifunctional | Unifunctional
13 | Dynamic | Static
14 | Irreversible | Reversible
15 | Open system | Closed system
16 | True | Artificial
17 | Self-healing | Self-destructive
18 | Nonlinear | Linear
19 | Multi-dimensional | Unidimensional
20 | Zero degree of freedom* | Finite degree of freedom
21 | Non-trainable | Trainable
22 | Continuous function of space, without boundary | Discrete
23 | Intangible | Tangible
24 | Open | Closed
25 | Flexible | Rigid
26 | Continuous function of time | Discrete function of time
27 | Balanced | Inherently unstable

*With the exception of humans, who have freedom of intention (Islam et al., 2017).

Table 4.2 True difference between sustainable and unsustainable processes (Reproduced from Khan and Islam, 2012).

Sustainable (natural) | Unsustainable (artificial)
Progressive/youth measured by the rate of change | Non-progressive/resists change; conservative/youth measured by departure from natural state
Unlimited adaptability and flexibility | Zero-adaptability and inflexible
Increasingly self-evident with time | Increasingly difficult to cover up aphenomenal source
100% efficient | Efficiency approaches zero as processing is increased
Can never be proven to be unsustainable | Unsustainability unravels itself with time

The case in point can be derived from any of the theories or "laws" advanced by Bernoulli, Newton (regarding gravity, calculus, motion, viscosity), Dalton, Boyle, Charles, Lavoisier, Kelvin, Poiseuille, Gibbs, Helmholtz, Planck and a number of others who served as the pioneers of modern science. Each of their theories and laws had in common a first assumption that would not exist in nature, either in content (tangible) or in process (intangible).

4.3 Aphenomenal Theories of the Modern Era

4.3.1 Conservation of Mass and Energy

The law of conservation of mass was known to be true for thousands of years. In 450 B.C., Anaxagoras said, "Wrongly do the Greeks suppose that aught begins or ceases to be; for nothing comes into being or is destroyed; but all is an aggregation or secretion of pre-existing things; so that all becoming might more correctly be called becoming mixed, and all corruption, becoming separate." However, Antoine Laurent Lavoisier (1743–94) is credited with discovering the law of the conservation of mass. Lavoisier's first premise was that "mass cannot be created or destroyed". This assumption does not violate any of the features of Nature. However, his famous experiment had some assumptions embedded in it. When he conducted his experiments, he assumed that the container was sealed perfectly. This violated the fundamental tenet of nature that an isolated chamber cannot be created. Rather than recognizing the aphenomenality of the assumption that a perfect seal can be created, he "verified" his first premise (the law of conservation of mass) "within experimental error". The error is not in the experiment, which remains real (hence, true) at all times. No, the error is in fact embedded within the first premise—that a perfect seal had been created. By avoiding confronting this premise, and by introducing a different criterion (e.g., experimental error), which is aphenomenal and, hence, non-verifiable, Lavoisier invoked a European prejudice, linked to the pragmatic approach, that "whatever works is true" (Islam et al., 2010). This leads to the linking of measurement errors to the outcomes, creating an obstacle to the possibility of independent or objective validation of the theory (Islam et al., 2013). What could Lavoisier have done with the knowledge of his time to link this to intangibles? For instance, had he left some room for a possible leak from the container, modern-day air conditioner designs would have taken into account how much Freon is leaked into the atmosphere. Lavoisier nevertheless faced extreme resistance from scientists who were still firm believers in the phlogiston theory (from the Greek word phlogios = 'fiery'). This theory was first promoted by a German physician, alchemist, adventurer, and professor of medicine, Johann Joachim Becher (1635–1682). The theory recognizes a form or state of matter, named phlogiston, existing within combustible bodies. When these were burnt (energy added), this matter was thought to be released to achieve its "true" state. The theory enjoyed the support of mainstream European scientists for nearly 100 years. One of its proponents was Robert Boyle, the scientist who would later gain fame for relating the pressure and volume of a gas. Mikhail Vasilyevich Lomonosov was a Russian scientist, writer and polymath who made important contributions to literature, education, and science. He wrote in his diary: "Today I made an experiment in hermetic glass vessels in order to determine whether the mass of metals increases from the action of pure heat.
The experiment demonstrated that the famous Robert Boyle was deluded, for without access of air from outside, the mass of the burnt metal remains the same." Albert Einstein came up with a number of theories, none of which is dubbed a "law". The most notable was the theory of relativity. Unlike the theories of other European scientists of modern times, it recognized the reality of Nature as the proper standard of truth for the purposes of science. This was a refreshing approach, considering that the 'steady-state model' had been in use since Aristotle's time. Ironically, the very first scientific article that mentioned relativity after Einstein was by Walter Kaufmann, who "conclusively" refuted the theory. The point that Kaufmann didn't make, however, is that Einstein's time function was in reverse order: instead of making the mass, environment or event a function of time, he made time a function of perception. As pointed out by Islam et al. (2014), perception, as well as human thought material (HTM), is a function of the environment, which is a function of time. Even though this "conclusive" refutation did not last very long, one point continues to obscure scientific studies, which is the expectation that something can be "proven". This is a fundamental misconception, as outlined by Zatzman and Islam (2007a) and more recently by Islam et al. (2013, 2017). The correct statement in any scientific research should involve discussion of the premises a body of research is based on. The first premise represents the one fundamental intangible of the thought process. If the first premise is not true because it violates one or more fundamental feature(s) of Nature, the entire deduction process is corrupted and no new knowledge can emerge from the deduction. Einstein's equally famous theory is more directly involved with mass conservation. Using the first premise of Planck (1901), he derived E = mc2. Einstein's formulation was the first attempt by European scientists to connect energy with mass. However, in addition to the aphenomenal premises of Planck, this famous equation has its own premises that are aphenomenal (see Table 4.3). Nonetheless, the equation remains popular and is considered to be useful (in a pragmatic sense) for a range of applications, including nuclear energy. For instance, it is quickly deduced from this equation that 100 kJ is equivalent to approximately 10−9 gram of mass. Because no attention is given to the source of the matter or the pathway, the information regarding these two important intangibles is wiped out from the science of tangibles. The fact that a great amount of energy is released from a nuclear bomb is then taken as evidence that the theory is correct. By accepting this at face value (heat as the one-dimensional criterion), heat from nuclear energy, electrical energy, electromagnetic irradiation, fossil fuel burning, wood burning or solar energy becomes identical. This has tremendous implications for economics, which is the driver of modern engineering.

Table 4.3 Features of the external entity (from Islam, 2014).

No. | Feature
1 | Absolutely external (to everything else)
2 | All encompassing
3 | No beginning
4 | No end
5 | Constant (independent of everything else)
6 | Uniform
7 | Alive
8 | Infinity
9 | Absolutely True
10 | Continuous
11 | All pervasive in space
12 | All pervasive in time
13 | Infinite degree of freedom
14 | Unique
15 | Open system
16 | Dissimilar to everything else
17 | Absolute time (the time that controls mass)
18 | Absolute mass (pure light)
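As a sanity check on the 100 kJ ≈ 10⁻⁹ gram equivalence quoted above, the arithmetic follows directly from the equation itself, taking c ≈ 3 × 10⁸ m/s:

```latex
m \;=\; \frac{E}{c^{2}}
  \;=\; \frac{1\times 10^{5}\ \mathrm{J}}{\left(3\times 10^{8}\ \mathrm{m/s}\right)^{2}}
  \;\approx\; 1.1\times 10^{-12}\ \mathrm{kg}
  \;\approx\; 10^{-9}\ \mathrm{gram}
```

That is, within the pragmatic frame the equation provides, a mass of roughly one nanogram corresponds to 100 kJ of energy, regardless of the source or pathway of that energy.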

4.3.2 Other Theories and ‘Laws’

Table 4.1 lists the properties of natural entities that are part of creation and form integral parts of the universal order. Table 4.3 lists the fundamental features of the external entity. The existence of an external entity is a necessary condition in order to eliminate the notion of void that had been inherited from Atomist philosophy and was carried forward first by Thomas Aquinas and then by subsequent scientists, without exception (Islam, 2014). This external entity was first recognized as God (by the ancient Greek philosophers all the way to Avicenna and Averroes of the Islamic golden era), then conflated with plenum and aether (Islam et al., 2013; 2014). While the existence of such entities has been denied and sometimes 'proven' to be non-existent, the traits of this external entity have been ascribed to all forms of 'fundamental' particles, ranging from the photon to the Higgs boson. In addition, such features have also been invoked in galactic models in the form of various entities, ranging from "dark matter" and "black holes" to "absolute void". Newton introduced this as an 'external' force and defined it as the originator of differential motion. The original Averroes concept, as supported by the Qur'an, was that such an originator of motion is the Creator, whose traits are all different from the traits of creation. As will be discussed in later chapters, all features of this external entity also have duality, which is a characteristic feature of that entity. For instance, Absolute time (Feature 17) and Absolute mass (Feature 18) are opposite to each other. Similar duality exists in other traits of the external entity. Table 4.4 shows many currently used 'laws' and theories, all of which emerged from the New Science era after the Renaissance. Note how the first premises of practically all of these theories violate fundamental features of Nature. Only the conservation of mass — which in fact has its roots in ancient times — does not have an aphenomenal first premise.
New Science has given us only theories and 'laws' that have a spurious first premise, as evidenced by using Averroes' criterion for phenomenality (Zatzman and Islam, 2007a).

Table 4.4 How natural features are violated in the first premise of various 'laws' and theories of the science of tangibles (Islam et al., 2014).

Law or theory | First premise | Features violated (see Table 4.1)
Conservation of mass | Nothing can be created or destroyed | None, but applications used artificial boundaries
Quantum theories | Anything can be created from nothing; everything has multiple histories | 4, 6, 22, 23, 26
Conservation of energy | No energy can be created or destroyed in isolation with mass | 22, 23, 26
Big bang theory | 14 billion years ago, there was a super hot entity of infinite mass and zero volume that has been expanding since the big bang | 1, 3, 6, 9, 14, 24, 26
Big chill theory | 14 billion years ago, there was a super chill entity of infinite mass and zero volume that cracked and exploded into infinite pieces | 1, 3, 6, 9, 14, 24, 26
Saul Perlmutter and Brian Schmidt (2011 Nobel Prize) | Universe is expanding with acceleration | 1, 3, 6, 9, 14, 24, 26
Higgs boson (2013 Nobel Prize) | Uniform, discrete, symmetric, fundamental particles of zero mass, empty space in between | 4, 7, 22, 23
Atomic theory | Uniform, symmetric, discrete, fundamental particles of finite mass | 4, 7, 22, 23
Einstein's light theory | Photons of zero mass and constant speed | 4, 7, 13, 22, 23, 26
Genetic theories | Genes as fundamental building blocks of living organisms in isolation | 4, 6, 13, 22, 23, 26
Defective genes | Inherent defects of genes in isolation | 4, 6, 13, 17, 22, 23
Probability theories | Steady state, repetitive and repeatable | 4, 5, 6, 13, 16, 22, 23, 26
Relativity | Time a function of perception, perception a function of person | 4, 7, 16, 22, 23, 26, 27
Gravitational theory | Force a function of mass, steady state, time a function of gravity | 4, 7, 16, 22, 23, 26, 27
Cosmic theories | Empty space between celestial bodies that are expanding or contracting | 4, 7, 16, 22, 23, 26, 27
Lavoisier's deduction | Perfect seal | 15
Phlogiston theory | Phlogiston exists | 16, 22, 23, 26
E = mc2 | Mass of an object is constant | 13, 22, 23, 26
E = mc2 | Speed of light is constant | 13, 22, 23, 26
E = mc2 | Nothing else contributes to E | 14, 19, 20, 22, 23, 24
Planck's law | If the medium is of homogeneous and isotropic constitution, then the radiation is homogeneous, isotropic, unpolarized, and incoherent | 5, 8, 10, 17, 22, 23, 26
Aether theory | Zero mass, zero energy | 16, 22, 23, 26
Quantum cosmic theory | Infinite mass, zero energy | 16, 22, 23, 26
Charles's law | Fixed mass (closed system) of ideal gas at constant pressure | 3, 7, 24
Boyle's law | A fixed mass (closed system) of ideal gas at fixed temperature | 3, 7, 24
Kelvin's scale | Kelvin temperature scale is derived from the Carnot cycle and based on the properties of ideal gas | 3, 8, 14, 15
Thermodynamics, 1st law | Energy conservation (the first law is no longer valid when a relationship of mass and energy exists) | 22, 23, 26
Thermodynamics, 2nd law | Based on the Carnot cycle, which is operable under the assumptions of ideal gas (imaginary volume), reversible process, adiabatic process (closed system) | 3, 8, 14, 15
Thermodynamics, 0th law | Thermal equilibrium | 10, 15
Poiseuille | Incompressible uniform viscous liquid (Newtonian fluid) in a rigid, non-capillary, straight pipe | 7, 22, 23, 25, 26
Bernoulli | No energy loss to the surroundings, no transition between mass and energy | 15, 22, 23, 26
Newton's 1st law | A body can be at rest and can have a constant velocity | 13, 22, 23, 26
Newton's 2nd law | Mass of an object is constant; force is proportional to acceleration; external force exists | 7, 13, 14, 16, 18, 22, 23, 26, 27
Newton's 3rd law | The action and reaction are equal | 3, 22, 23, 26
Newton's viscosity law | Uniform flow, constant viscosity | 7, 13, 22, 23, 26
Maxwell's equation | Uniform, spherical, rigid balls form energy | 4, 7, 22, 23, 26
Newton's calculus | Limit ∆t → 0 | 22, 23
Fractal theory | A single pattern that repeats itself exists | 1–4, 6, 8, 10

If all theories of New Science are based on premises that violate fundamental traits of nature, then such laws and theories, if applied as universal, must weaken considerably or, worse, implode. They can be applied only to certain fixed conditions that pertain to 'idealized' situations existing nowhere in nature. For example, the laws of motion developed by Newton cannot explain the chaotic motion of nature, owing to assumptions that contradict the reality of Nature.
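The point about 'idealized' situations can be made concrete with a short numerical sketch (ours, not the book's): the ideal-gas premise behind Boyle's and Charles's laws is compared with the van der Waals equation, which adds minimal corrections for molecular volume and intermolecular attraction. The CO2 constants below are standard handbook values; the scenario (1 mol in 1 litre at 300 K) is chosen only for illustration.

```python
# Compare the ideal-gas pressure with the van der Waals prediction for
# 1 mol of CO2 in a 1-litre vessel at 300 K. The ideal-gas 'law' assumes
# point particles with no mutual attraction -- a condition that exists
# nowhere in nature.
R = 8.314          # J/(mol K), universal gas constant
a = 0.3640         # Pa m^6/mol^2, CO2 attraction parameter
b = 4.267e-5       # m^3/mol, CO2 co-volume parameter

def p_ideal(n, V, T):
    """Ideal gas: P = nRT/V."""
    return n * R * T / V

def p_vdw(n, V, T):
    """van der Waals: P = nRT/(V - nb) - a n^2/V^2."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

n, V, T = 1.0, 1e-3, 300.0
print(p_ideal(n, V, T))   # ~2.49e6 Pa
print(p_vdw(n, V, T))     # ~2.24e6 Pa, roughly 10% lower
```

Even at ordinary laboratory conditions the two predictions differ by about 10%, illustrating the chapter's contention that the 'fixed mass of ideal gas' premise holds only approximately, and only over a narrow range of conditions.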

The experimental validity of Newton’s laws of motion is limited to describing instantaneous macroscopic and tangible phenomena. However, microscopic and intangible phenomena are ignored. In this regard, the information age offers us a unique opportunity in the form of 1) transparency (arising from monitoring space and time); 2) infinite productivity (due to the inclusion of intangibles, zero-waste, and transparency); and 3) custom-designed solutions (due to transparency and infinite productivity). However, none of these traits has any meaning if we don’t have a theory with a correct hypothesis. Islam et al. (2015) addressed the most important theories advanced in the modern age and deconstructed them in order to set stage for a comprehensive theory that can explain natural phenomena without resorting to dogma. These theories are widely deemed to be ‘revolutionary’ in the sense of having caused a ‘paradigm shift’ in their respective fields. They contended, however, that all these theories are rooted in fundamentally flawed theories and ‘laws’ from the time of Atomism. Here is the list of theories. 10. Information theory: Claude Shannon, 1948 Information theory is a branch of applied mathematics, electrical engineering, and computer science involving the quantification of information. Information theory was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. This theory is important because it changes intangibles into tangibles. Since its inception it has broadened to find applications in many other areas, including statistical inference, natural language processing, cryptography, neurobiology, the evolution and function of molecular codes, model selection in ecology, thermal physics, quantum computing, linguistics, plagiarism detection, pattern recognition, anomaly detection and other forms of data analysis. 
This theory is based on fundamental frequencies and waveforms that are non-existent in nature. 9. Game theory: John von Neumann and Oskar Morgenstern, 1944 (with important embellishments from John Nash in the 1950s) Even though originally developed for Economics, Game theory is used in political science, psychology, logic, computer science, biology, and various other disciplines in relation to risk management. The fundamental premise of this theory is that the resources are finite and nonrenewable. It also adds other spurious premises related to probability, Atomism, human-nonhuman interactions, and static nature of the society. 8. Oxygen theory of combustion: Antoine Lavoisier, 1770s Lavoisier did not discover oxygen, but he figured out that it was the gas that combined with substances as they burned. Lavoisier thereby did away with the prevailing phlogiston theory and paved the way for the development of modern chemistry. Underlying premises of Lavoisier have been deconstructed in Chapter 4. 7. Plate tectonics: Alfred Wegener, 1912; J. Tuzo Wilson, 1960s Wegener realized that the continents drifted around as early as 1912. But it was not until the

1960s that scientists put the pieces together in a comprehensive theory of plate tectonics. Wilson, a Canadian geophysicist, was a key contributor of some of the major pieces, while many other researchers also played prominent roles. This theory uses the same premise that other scientists have used to describe origin of universe. Such premises have been deconstructed by Khan and Islam (2016). 6. Statistical mechanics: James Clerk Maxwell, Ludwig Boltzmann, J. Willard Gibbs, late 19th century By explaining heat in terms of the statistical behavior of atoms and molecules, statistical mechanics made sense of thermodynamics and also provided strong evidence for the reality of atoms. Besides that, statistical mechanics established the role of probabilistic math in the physical sciences. Modern extensions of statistical mechanics (sometimes now called statistical physics) have been applied to everything from materials science and magnets to traffic jams and voting behavior. This theory invokes spurious premises related to Newtonian mechanics, Atomism, and probability, which is deconstructed by Islam et al. (2015). 5. Special relativity: Albert Einstein, 1905 This theory is revolutionary for the fact that it includes time as an implicit function. However, it introduces spurious premises that will be discussed in this chapter. In itself, it is based on Maxwell’s theory, which in turn is based on Newtonian description of matter. 4. General relativity: Einstein, 1915 It is conventionally perceived that General relativity was much more revolutionary than special relativity, because it replaced Newton’s law of gravity in favor of curved spacetime. It gave rise to the emergence of a series of cosmic theories, ranging from Big bang to blackholes. 3. Quantum theory: Max Planck, Einstein, Niels Bohr, Werner Heisenberg, Erwin Schrödinger, Max Born, Paul Dirac, 1900–1926 Quantum theory replaced the entire fabric of classical physics that was based on Newtonian mechanics. 
In the HSSA degradation mode, quantum theories represent the worst form of cognition.

2. Evolution by natural selection: Charles Darwin, 1859. Darwin showed that the intricate complexity of life and the intricate relationships among lifeforms could emerge and survive through natural processes. This theory has been deconstructed by Islam et al. (2010, 2015), especially in relation to its extension to human society. Fundamentally, this theory is similar to quantum theory and applies similar spurious premises.

1. Heliocentrism: Copernicus, 1543. Eurocentric prejudices dictate that such 'great insight' belonged to the ancient Greeks. While Copernicus was the first in Europe to challenge the Establishment in favour of natural cognition, we have argued that Islamic scholars had been using far more powerful cognition tools for some 1000 years prior to Copernicus. These 1000 years of history have been wiped out of New Science, triggering a cognition tool far worse than dogma itself. This aspect has been

discussed in Chapter 3 and will be discussed in this chapter and beyond.

4.3.3 The Consequences

We have already identified that almost all the theories and "laws" of the modern age rest on spurious assumptions. It was also established that New Science is insufficient to account for natural phenomena, thereby making it impossible to design processes that are insightful in the true sense of knowledge. At present, numerous debates break out for and against any study that appears in the mainstream literature. Both sides use New Science to make their points, without questioning the validity of the "laws" and theories of New Science. In this book, the premises behind all of these laws and theories are challenged. Just as in the global warming debate, in which each party calls the other 'flat-earth theorists' or 'conspiracy theorists', debates rage over every point of the modern medical and chemical industries. Ironically, scientists on both sides subscribe to the "chemicals are chemicals" and "energy is energy" mantra: they debate whether organic food and wood stoves are better than their toxic alternatives, yet they agree that it is carbon or heat that causes cancer. Just as no one in the global warming debate asks 'how could carbon dioxide be the enemy when we need carbon dioxide for producing life-sustaining plants', no one wonders how high temperature or carbon can cause cancer when carbon is the essence of life and clay ovens have produced healthy breads for thousands of years – all at high temperatures. No amount of doctrinal sermonizing can explain these contradictions, particularly as the same group that promotes nuclear as "clean" energy considers derivatives of genetically modified crops, laden with chemical fertilizers and pesticides and processed through toxic means, to be "renewable". This same group also proclaims that electricity collected with toxic silicon photovoltaics and stored in even more toxic batteries – all to be utilized through the most toxic "white light" – is sustainable.
In the past, the same logic was used in the "I can't believe it's not butter" culture that saw artificial fat (trans fat) come to dominate real fat (saturated fat), and it is now geared toward creating a similar crisis involving water (CBC, Dec. 19, 2008; Icenhower, 2006). Classical dynamics, as represented by Newton's laws of motion, emphasizes fixed and unique initial conditions, stability, and the equilibrium of a body in motion (Islam et al., 2010). However, as the list in Table 4.5 below serves to clarify, it is not possible with these 'laws' and theories to make a distinction between natural products and their corresponding artificial substitutes. Consequently, the same theories that formed the basis of engineering the artificial products cannot be called upon to reverse the process.

Table 4.5 Transitions from natural to processed.

Wood → plastic
Glass → PVC
Cotton → polyester
Natural fiber → synthetic fiber
Clay → cement
Molasses → sugar
Sugar → sugar-free sweeteners
Fermented flower extract → perfume
Water filter (hubble bubble) → cigarette filter
Graphite, clay → chalk
Chalk → marker
Vegetable paint → plastic paint
Natural marble → artificial marble
Clay tile → ceramic tile
Ceramic tile → vinyl and plastic
Wool → polyester
Silk → synthetic
Bone → hard plastic
Organic fertilizer → chemical fertilizer
Adaptation → bioengineering

The above transitions embody the main bulk of modern technological developments, which Nobel laureate chemist Robert Curl has characterized as a 'technological disaster'. This process has affected water resources the most, followed by petroleum – the second most abundant liquid on Earth. The most toxic product to emerge from this process is plastic, the material that has revolutionized the modern era, aptly called the 'plastic era', for over a century. Plastic waste has in turn polluted everything on the earth's crust. Most notable among the affected natural chemicals is salt (sodium chloride) – the most important chemical of the human blood system – which is now contaminated by plastic around the world. A recent study (Genza, 2017) shows that sea salt around the world has been contaminated by plastic pollution, adding to experts' fears that microplastics are becoming ubiquitous in the environment and finding their way into the food chain via the salt in our diets. New studies have shown that tiny plastic particles have been found in sea salt in the UK, France and Spain, as well as China and now the US. Up to 12.7 million tonnes of plastic enters the world's oceans every year, equivalent to dumping one garbage truck of plastic per minute into the world's oceans, according to the United Nations. This represents the

symptom of the ‘plastic addiction’, which is synonymous with the plastic era of over 100 years right up to the Information Age.
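The "one garbage truck per minute" equivalence quoted above can be checked with simple arithmetic; the sketch below only assumes that a packed garbage truck carries on the order of 20–25 tonnes (an illustrative figure, not one given in the text):

```python
# Sanity check of the UN figure quoted above: 12.7 million tonnes of plastic
# entering the oceans per year, described as one garbage truck per minute.
tonnes_per_year = 12.7e6
minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a year
tonnes_per_minute = tonnes_per_year / minutes_per_year
print(round(tonnes_per_minute, 1))  # 24.2 tonnes per minute, roughly one full truck
```

At about 24 tonnes per minute, the UN's one-truck-per-minute image is the right order of magnitude.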

4.4 The Sugar Culture and Beyond

In the HSSA degradation process of turning the natural into the artificial, the most important transition has been from honey to sugar. This transition has become iconic of the overall technology development process. The invention of sugar correlates strongly with the most meaningful degradation of overall health and lifestyle in human society during recorded history. While molasses was being manufactured without introducing any health hazards, artificial sugar meant the introduction of numerous toxic chemicals that are simply not suitable for human consumption. Yet it was introduced as a symbol of civilization. Sugar is white, it is sweeter than molasses, and it is far more profitable than honey. Today, the world consumes some 110 million tonnes of sugar annually. Yet sugar is prepared following a process that has never existed in nature. If one starts with the premise that the unnatural is not sustainable, the production of sugar marks a clear divergence from sustainable technology. If one has any doubt as to whether sugar production is a natural process, one should be reminded of what manufacturing sugar involves. It begins with crushing sugar cane, followed by 'cleaning' with calcium hydroxide, a synthetic chemical that is not used by natural processes for anything of benefit, let alone for cleansing. At a later stage, brown sugar is 'refined' with chemical bleach, a potent toxin that oxidizes useful nutrients to render the sugar 'white'. Then a series of other 'refining' materials, such as chalk, granulated carbon, etc., may be introduced. While this color is appealing to a public that consumes sugar unconsciously, what does it say about the long-term sustainability of a food that has just been reduced to a toxin? Indeed, anyone with a conscience and minimal knowledge of how food is processed in the human body would have prevented engineers from employing such a technique. This intervention did not happen.
Instead, the entire engineering discipline focused on how quickly more sugar could be manufactured, while marketing agencies went out and found new markets to sell the product. Two questions arise here. First, why did an enlightened group of people resort to using toxins to process food? Were they so intellectually bankrupt that they could not find a technique that nature already had in place? After all, what can be cleaner or whiter than milk? Why did the goodness of natural milk have to be compromised? As is often understood, it is because denaturing increases the profit margin. Natural processes can never be fully "cost-effectively engineered" in the sense of "subjected to command and control regimes consistent with maximizing profit in the shortest possible time". They are not amenable to the principal desideratum of mass production, which is profit based on minimum input costs. Minimizing input costs is not possible in a mass production context without sacrificing quality — usually through conversion of the real/natural into something artificial. If a reality index2 were associated with pricing, the corrected profit would invariably

be negative. In reality, nothing is cheaper than natural products as long as sunshine and mother's milk are still available "for free", i.e., at no charge or at no cost to one's capital outlay. How does this obvious logic elude modern-day academics? If the focus is so short-term that long-term benefits and shortfalls are entirely disregarded in all economic calculations (Zatzman and Islam, 2007), it will indeed elude the powers of conventional observation. Conventionally, long-term costs, including the costs of degrading the quality of a product or polluting the environment, are disregarded, leaving the general public to pick up the remedial costs much later. This much about immediate practice is almost trivially obvious. In the absence of an economic theory that includes long-term elements, however, any engineered product can be marketed as anything else, covering the economic bottom line. This is far from obvious, and it is the perfect cover for a system that is entirely artificial from root to surface. In this process, engineers have been playing a robotic role. They were given no option to look to the natural order for solutions. This robotization starts early in the education system and pervades all disciplines. It did not take humanity long to detect the effects of the sugar culture. For nearly a century, it has been known that sugar is responsible for non-genetic diabetes. Any reasonable consideration of, and rational reaction to, this superflux of diabetes would have led to health warnings against sugar and to minimizing its consumption. Yet the exact opposite happened. Sugar consumption skyrocketed as more and more processed food and fast food hit the marketplace. Sugar was introduced even as the first drink a newborn gets, displacing the age-old practice of giving honey to a newborn.
Based on flawed analysis, honey was in fact banned from pediatric sections of hospitals, and labels were slapped on honey containers warning people that honey can cause botulism – an utterly aphenomenal conclusion.3 Today, sugar or similar sweeteners are ubiquitous, with some foods containing 75% sugar (Gillespie, 2010). Over time, more 'side effects' of this sweet poison have emerged. For example, addiction to refined sugar is more problematic than addiction to cocaine, and is associated with obesity, cancer, and diabetes (Goldwert, 2012). Chemical engineering research has focused on several fronts, all maximizing short-term economic benefits. For instance, the notion that 'chemicals are chemicals' irrespective of their natural or artificial origin and components was used to sell the general public on the idea that natural sugar is the same as refined sugar; therefore, refined, i.e., artificial, sugar should be preferred because it is cheaper. After all, if honey has just as many calories as sugar but costs twice as much, the immediate practical reason to opt for honey disappears. Once this dogma of refinement trumping natural availability entrenched itself, research could focus — and indeed has focused — on developing cheaper and more effective forms of sweeteners. Biomedical engineering research then revealed the addictive nature of sugar (see Munro, D., 2015, "Inside The $35 Billion Addiction Treatment Industry," Forbes, April 27). Rather than being treated as a warning, this addictive nature was taken as a boon. Sugar causes a euphoric effect that triggers dopamine, the chemical that controls pleasure in the brain. This should have signaled to a conscientious mind that natural sugar, which produces natural glucose — the only food for the brain — cannot possibly be the same as artificial sugar that intoxicates the brain. Recently,

Munro (2015) presented how such an epidemic of addiction has been turned into a huge profit-making industry, thus completing the loop between food companies and pharmaceutical companies. On alcohol addiction, Munro quoted Harvard graduate, addiction expert, and author Dr. Lance Dodes: "I became the Director of the Alcoholism Treatment Unit at Harvard's McLean Hospital. I've probably treated a couple of thousand people who have one addiction or another. Almost all residential treatment programs in the United States are 12 Step based, so their effectiveness will depend entirely on whether 12 Step programs work and the statistics for AA are not good. It is helpful for 5–10% and that's a good thing. That's 5–10% of people who are being helped by A.A. – it's a lot better than zero percent – but it shouldn't be thought of as the standard of treatment because it fails for most people – for the vast majority of people." Add to this the fact that scientists are beginning to discover that sugar may actually be more addictive than cocaine (see, for instance, Sullum, 2013, for some of the scholarly articles that opened up the debate). While it is known that sugar causes many ailments, it is little known that the fact that sugar is addictive is a motivating factor for the food industry, which colludes with the pharmaceutical industry while the academic community remains totally preoccupied with advancing an ever-growing number of paradoxical theories (Satel and Lilienfeld, 2014). The mindset of drug dealers is the moral equivalent of such a marketing policy. Medical research groups focused on 'fighting' the symptoms of sugar. Because sugar consumption led to non-genetic diabetes, the immediate replacement of natural insulin with artificial insulin became the focus. Anyone with common sense and a good conscience would see this 'remedy' for diabetes as being as devastating as attaching an artificial limb because the natural limb had a cut that would otherwise heal on its own.
Medical professionals, however, put diabetes patients on permanent insulin. Considering that insulin must be produced internally for it to have a natural hold on the sugar-burning process in an organic environment, how scientific is this? A different campaign involved engineering the fabrication of poisons in order to fight bacteria that thrive under sugary conditions. It became common practice to use toxic chemicals, such as sodium nitrate, sorbic acid, sulfur dioxide, benzoate, and others, to fight off bacterial growth. Magically, these toxins not only restored the original longevity of the food (before sugar was added), they also increased shelf life! This was considered great technological progress in the eyes of modern corporations and corporatizers. The obsession with altering natural properties in order to 'fight' bacteria or natural decay was so intense that, by the 1960s, the use of gamma rays to kill bacteria became a common practice. It was assumed that the process of irradiation itself would not affect the food. A different marketing group began the campaign of vilifying natural fat. It was nothing but a publicity stunt, as anyone with logical thinking should have known that natural fat is necessary for the sustenance of life. The biggest accomplishment of the companies producing margarine – the first artificial fat, derived from animal fat in the Napoleonic era – was to render non-edible vegetable oil into artificial butter. Before the fraud of trans fatty acids was detected, a huge campaign that started in the 1970s culminated in the 1990s, when the U.S. health department actively

campaigned in favor of artificial fat and calorie counting, and against natural fat and natural sugar (as in fruits, etc.). The food pyramid was replaced with a dart board that placed artificial food at the centre, and 'fat-free' became the sign of good health. The sugar peddlers soon discovered that "if you take fat out of food, it tastes like cardboard"; therefore, fat was replaced with sugar. Sugar consumption saw unprecedented growth (see Figure 4.3).

Figure 4.3 Millions of tons of sugar produced globally over the years (from Website 2).

Increased sugar consumption is likely to be concentrated in developing countries (Figure 4.4). Asia and Africa will show the most growth, with growth in Asia attributable to population growth rates, economic development, and changing tastes and preferences. In Africa, the effect of population growth is expected to outweigh the decline in per capita sugar consumption. Central America, South America and the Caribbean have shown a steady increase in consumption, mainly as a result of population growth as well as globalization. Sugar consumption in industrial countries will decline, although this fall should be more than offset by growth in Asia and Africa alone. In North America and the EU, consumption is stagnant: the population is growing only slowly, and the effect of rising incomes on expenditure on sugar and sugar-containing products is minimal. In the USA, High Fructose Corn Syrup (HFCS, called HFS or High Fructose Syrup in Europe) is displacing ever more sugar, though at a slower growth rate than in the past. In Central Europe and the Former Soviet Union (FSU), consumption decreased significantly as economic transformation took place, though it now seems to be increasing again.

Figure 4.4 Sugar production history by region (from Islam et al., 2015).

The three largest sugar consumers are India, the EU 15 and the Former Soviet Union (see Table 4.6). Consumption in the FSU and the USA has fallen sharply, but has risen significantly in India, China and Pakistan. The highest per capita consumption occurs in Brazil, with Mexico in second place. China has the lowest per capita consumption. However, China is the place that leads in saccharin production. These will all correlate with diabetes, cancer, and other ailments that are considered to be driven by genetics.

Table 4.6 Sugar consumption for various regions/countries (from Islam et al., 2015).

Country    | Total consumption (million tonnes) | % of world         | Per capita consumption (kg)
           | 1980    1990    1996    2001       | consumption (2001) | 1980    1990    2001
India      | 5.60    11.07   14.75   20.0       | 14.5               | 8.3     13.4    15.7
EU         | 10.50   13.067  14.525  14.6       | 10.6               | 31.1    38.1    34.5
FSU*       | 12.40   13.40   10.27   10.5       | 7.6                | 46.7    46.2    37.0
USA        | 8.93    7.85    8.73    9.5        | 5.4                | 39.2    31.4    29.0
Brazil     | 6.55    6.62    8.30    9.8        | 7.1                | 54      44      53.1
China      | 4.30    7.13    8.50    10.2       | 7.4                | 4.3     6.2     6.3
Mexico     | 3.23    4.43    4.25    5.0        | 3.6                | 46.5    54.5    46.7
Pakistan   | 0.89    2.29    2.93    3.6        | 2.8                | 10.7    20.4    24.5
Indonesia  | 1.73    2.65    3.25    3.2        | 2.3                | 11.8    14.8    15.8
Japan      | 2.70    2.83    2.60    2.4        | 1.7                | 23.1    22.9    19.0
Total      | 56.84   71.32   78.12   89.1       | 64.5               | -       -       -

*FSU = Former Soviet Union.

The consumption of sugar in Asian countries is increasing as a direct result of lower sugar prices and freer availability. In the last 20 years, sugar consumption in Asia increased by 26 million tonnes; 38% of world sugar consumption now belongs to Asia. Figure 4.5 shows the chemical structure of a typical sugar molecule. Note how no information pertaining to catalysts and the numerous chemicals used in processing is attached to the molecular structure.
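The 2001 figures in Table 4.6 can be cross-checked for internal consistency: the listed countries' combined 89.1 million tonnes is stated to be 64.5% of world consumption, which implies a world total of roughly 138 million tonnes, from which the individual country shares follow. A quick check, using only numbers taken from the table:

```python
# Cross-check of Table 4.6 (2001 column): the listed countries' total of
# 89.1 million tonnes is stated to be 64.5% of world consumption.
listed_total = 89.1          # million tonnes, sum of listed countries (2001)
listed_share = 64.5 / 100    # stated fraction of world consumption
world_total = listed_total / listed_share
print(round(world_total, 1))  # 138.1 million tonnes implied world consumption

# India's 20.0 million tonnes then matches its listed 14.5% world share.
india_share = 100 * 20.0 / world_total
print(round(india_share, 1))  # 14.5
```

The implied world total of about 138 million tonnes also reproduces the listed shares for the EU, Brazil, China, Mexico, Indonesia and Japan to within rounding.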

Figure 4.5 Sugar structure (note how all catalysts disappear).

As more and more people got addicted to sugar, an entire generation became afflicted with sugar-induced health problems. This crisis, along with the mantra that fat is evil, led to the development of alternative sweeteners that are "sugar-free". A new index started to surface: the measuring of everything in terms of calories, taken as an indicator of unhealthiness. So an entire line of artificial products was manufactured, all focused on maximizing sweetness and minimizing cost. It meant the introduction of Saccharin®.

4.5 The Culture of the Artificial Sweetener

Even though sugar that is artificially processed marked the beginning of the culture of artificial sweeteners, there is some consensus that the truly artificial began with saccharin. Saccharin (C7H5NO3S) was discovered in 1878 in the Johns Hopkins University laboratory of Ira Remsen, a professor of chemistry. At age 21, Remsen had graduated with honors from the College of Physicians and Surgeons at Columbia University, earning an M.D. He abandoned his medical career to pursue organic chemistry,4 first at the University of Munich, and then at the University of Göttingen, where he studied with Rudolph Fittig and began research on the oxidation of toluene isomers. In Fittig's lab Remsen also studied sulfobenzoic acids, eventually publishing 75 papers on these and related compounds, laying the groundwork for the discovery of benzoic sulfinide — saccharin. Remsen returned to the United States in 1876 — bringing with him influential German ideas about chemistry education — and accepted a professorship at Johns Hopkins. There, he

continued his research on the oxidation of methylated sulfobenzoic acids and their amides. In 1877 a Russian chemist named Constantin Fahlberg was hired by the H.W. Perot Import Firm in Baltimore. Fahlberg studied sugar, while H.W. Perot imported sugar. The company enlisted him to analyze a sugar shipment impounded by the U.S. government, which questioned its purity. H.W. Perot also hired Remsen, asking him to provide a laboratory for Fahlberg's tests. After completing his analyses and while waiting to testify at trial, Fahlberg received Remsen's permission to use the lab for his own research. Working alongside Remsen's assistants, Fahlberg found the lab a friendly place. In early 1878, Remsen granted Fahlberg's request to take part in the institute's research. One night that June, after a day of laboratory work, Fahlberg sat down to dinner. He picked up a roll with his hand and bit into a remarkably sweet crust. Fahlberg had literally brought his work home with him, having spilled an experimental compound over his hands earlier that day. He ran back to Remsen's laboratory, where he tasted everything on his worktable — all the vials, beakers, and dishes he used for his experiments. Finally he found the source: an overboiled beaker in which o-sulfobenzoic acid had reacted with phosphorus (V) chloride and ammonia, producing benzoic sulfinide. Although Fahlberg had previously synthesized the compound by another method, he had had no reason to taste the result. Serendipity had provided him with the first commercially viable alternative to cane sugar. Fahlberg and Remsen published a joint paper on their discovery in February 1879. Remsen had a personal disdain for commercial ventures. Fahlberg, however, aggressively pursued the commercial potential of the new compound. He named it "Fahlberg's saccharin" and patented it without informing Remsen, infuriating him. Soon, the production and commercialization of saccharin became prominent. However, there was opposition.
The first came from a German dye trust, which brought the price down from $4.50 to $1.50 per kg. Locally, saccharin was criticized as 'false sugar' with no food value. However, this burst of common sense was silenced by none other than U.S. President Theodore Roosevelt, who responded to the call to ban saccharin with his infamous line, "anyone who says saccharin is injurious to health is an idiot" (Schwarcz, 2002). 135 years later, the world would still be trying to figure out how anti-sugar this sugar substitute is (Suez et al., 2014). Saccharin® is produced from petroleum products. Scientifically speaking, sugar has a natural source that is food (e.g., sugar cane or beet), whereas Saccharin® has a natural source that is not food (petroleum products are vegetation that is millions of years old). Logically, any movement toward aphenomenality would make the sweetener even more addictive, but that is considered a boon by a culture of greed. No in-depth health studies were necessary to ascertain that Saccharin®, being far more artificial than sugar, would be worse than sugar. However, no one commented on this aspect, and Saccharin® enjoyed unprecedented growth based on the following "qualities", each of which was based on a dogmatic assumption: 1) it is not sugar; 2) it has little caloric content (because all calorie-measuring techniques use burning ability as the measure of calories), therefore "it must be good for weight loss"; 3) it does not cause tooth decay the way sugar does; and 4) it requires fewer preservatives (because it is

actually toxic to bacteria) than sugar does. The sugar culture took a new turn, pointing to a new low in science and logical thinking. All signs show it was deliberate and sanctioned from the top of the Establishment. As stated earlier, even the U.S. president interfered. Theodore Roosevelt also wrote to Mr. Queeny on July 7, 1911 (Islam et al., 2015): "I always completely disagree about saccharin both as to the label and as to its being deleterious … I have used it myself for many years as a substitute for sugar in tea and coffee without feeling the slightest bad effects. I am continuing to use it now. Faithfully yours, T. Roosevelt" Seven years later, Roosevelt would die in his sleep at Sagamore Hill as a result of a blood clot detaching from a vein and traveling to his lungs. He was 60 years old. The point here is the level of idiocy packaged in the form of power. Many talk about dogma and the role of religion, but far fewer see how this is about Money and control — the true 'axis of evil' today. Over 100 years later, another Nobel Peace laureate president, Barack Obama, who himself was a smoker, signed off on a bill to protect Monsanto and legalize uncontrolled use of genetically modified organisms (GMOs) and genetic engineering (GE). Engdahl (2007) had outlined the evils of GMOs, but technology development has not been about the welfare of society. In 2013, President Obama inked his name to H.R. 933, a continuing resolution spending bill approved in Congress days earlier. Buried 78 pages into the bill is a provision that protects biotech corporations such as the Missouri-based Monsanto Company from litigation.
In light of approval from the House and Senate, more than 250,000 people signed a petition asking the president to veto the spending bill over the biotech rider tacked on, an item that has since been widely referred to as the 'Monsanto Protection Act.' Taken in context, GMOs are another in a long line of environmentally damaging practices adopted for short-term gain/profit. From the large-scale deforestation of the world's old-growth forests, to subsistence farming, to modern imported-fertilizer/pesticide/herbicide/fossil-fuel-dependent industrial agriculture, the trend has been consistent: GMOs are just another in that line of attempts to temporarily maintain or raise crop yields. Regardless of the type of agriculture or the location, there are limits to how long any land can remain productive. Applying imported fertilizers, or utilizing GMOs, only provides a temporary disruption of the land's transition to non-productive "wasteland" and ultimate desertification. At this point we cannot resist sharing with readers Ronald Reagan's cigarette commercials glamourizing smoking (Picture 4.1). Ronald Reagan, the 'Star Wars' president, was considered the most popular president ever, although the 'war' (named for a popular science fiction film series) was actually about developing weapons systems to control, or 'conquer', outer space for the US against its perceived enemies. This scheme required such an enormous outlay of the collective wealth of the U.S. that it was deemed, after long debate, unworkable. However, it has become a vastly successful commercial enterprise: Star Wars toys, stories, and movies remain among the most popular forms of entertainment, even for adults. In the prevailing U.S. culture, dominance is synonymous with weapons of mass destruction, which can be simultaneously morphed into consumer products engineered to become part of the human cultural space (Carey 1995). Few see the connection between mass destruction and the onset of numerous chemical and pharmaceutical products. It has become a matter of continuous decline, with increasing power going to the group that panders artificial products and amasses wealth at the expense of the general welfare of the public.

Picture 4.1 Former President Ronald Reagan in a cigarette commercial. Such commercials are now banned in the USA.

Figure 4.6a shows the chemical structure of saccharin and related salts. The scientific name of this chemical is benzoic sulfimide or ortho-sulphobenzamide. The popularity of this toxic chemical was entirely motivated by monetary gain.
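As a small arithmetic aside, the molecular formula C7H5NO3S quoted earlier can be checked against saccharin's known molar mass of about 183.2 g/mol by summing standard atomic masses; a minimal sketch:

```python
# Molar mass of saccharin (C7H5NO3S) from standard atomic masses (g/mol).
atomic_mass = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}
formula = {"C": 7, "H": 5, "N": 1, "O": 3, "S": 1}
molar_mass = sum(atomic_mass[el] * n for el, n in formula.items())
print(round(molar_mass, 2))  # 183.18 g/mol
```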

Figure 4.6a Chemical structure of saccharin and related salts.

A number of companies around the world manufacture saccharin. Most manufacturers use the basic synthetic route described by Remsen and Fahlberg, in which toluene is treated with chlorosulfonic acid to produce ortho- and para-toluenesulfonyl chloride. The original route by Remsen and Fahlberg starts with toluene; another route begins with o-chlorotoluene. Sulfonation by chlorosulfonic acid gives the ortho- and para-substituted sulfonyl chlorides. The ortho isomer is separated and converted to the sulfonamide with ammonia. Oxidation of the methyl substituent gives the carboxylic acid, which cyclizes to give saccharin free acid (Figure 4.6b).

Figure 4.6b Chemical reactions used during saccharin manufacturing.

Subsequent treatment with ammonia forms the corresponding toluenesulfonamides. ortho-Toluenesulfonamide is separated from the para-isomer (this separation is alternatively performed on the sulfonyl chlorides), and ortho-toluenesulfonamide is then oxidized to ortho-sulfamoylbenzoic acid, which on heating is cyclized to saccharin (Mitchell and Pearson, 1991). The only producer in the United States currently uses the Maumee process, in which saccharin is produced from purified methyl anthranilate, a substance occurring naturally in grapes. In this process, methyl anthranilate is first diazotized to form 2-carbomethoxybenzenediazonium chloride. Sulfonation followed by oxidation yields 2-carbomethoxybenzenesulfonyl chloride. Amidation of this sulfonyl chloride, followed by acidification, forms insoluble acid saccharin. Subsequent addition of sodium hydroxide or calcium hydroxide produces the soluble sodium or calcium salt (Mitchell and Pearson, 1991). It should be noted here that saccharin does not become any more acceptable because of its origin in grapes (e.g., for the U.S. manufacturer). One has merely to cite the example of the poppy seed, which yields all sorts of toxins upon 'processing'. Soon after Saccharin® was introduced to the market, it became popular as a 'healthy' alternative to sugar, especially in third-world countries. All patients with diabetes and other sugar-related ailments were put on the Saccharin® diet, often dubbed 'diabetic sugar'. Instead of changing direction and moving to natural substitutes for sugar, medical professionals were happy to make adjustments on the opposite side of the health spectrum. Diabetes diagnosis, 'treatment', the 'sugar-free' diet, insulin: they all flourished in the West, soon followed by the third-world countries. Sugar consumption started to decline (per capita) as Saccharin® production climbed dramatically.
Today, we consume some 40% less sugar globally than we did in the 1970s. Chemical companies started to make huge profits, as did the pharmaceutical industry through its many medical ‘remedies’ for diabetes. Figure 4.7 shows the market share of saccharin in 2001. This share has since increased for saccharin, as well as for other artificial sweeteners (Figure 4.8).

Figure 4.7 Saccharin consumption share in 2001 (From Khan and Islam, 2016).

Figure 4.8 The dominance of saccharin has continued over the last three decades (from Islam et al., 2015).

4.6 Delinearized history of Saccharin® and the Money Trail
In 1900, the annual production of saccharin in Germany was reported to be 190 tonnes. In 1902, partly at the insistence of beet sugar producers, saccharin production in Germany was brought under strict control, and saccharin was made available only through pharmacies. This was the first effort to link saccharin with some sort of medicinal value. As stated earlier, this linking worked particularly well in third-world countries, whose medical professionals are trained with curricula developed in the developed countries. Saccharin use increased during the First World War and immediately thereafter as a result of sugar rationing, particularly in Europe. By 1917, saccharin was a common tabletop sweetener in America and Europe; it was introduced to the Far East in 1923. It was at that time that saccharin was being touted as a ‘healthy alternative’ to sugar, particularly for people who contracted non-genetic diabetes. The consumption of saccharin continued between the Wars, with an increase in the number of products in which it was used. The shortage of sugar during the Second World War again produced a significant increase in saccharin usage. In the early 1950s, calcium saccharin was introduced as an alternative soluble form (Mitchell and Pearson, 1991). Both forms of saccharin (calcium and sodium) were banned in 1977 in the U.S., but

reinstated subject to strict labelling stating: “Use of this product may be hazardous to your health, this product contains saccharin which has been determined to cause cancer in laboratory animals”. It was known from early on that saccharin interferes with normal blood coagulation, blood sugar levels, and digestive function. Saccharin is banned in France, Germany, Hungary, Portugal, and Spain, and banned as a food additive in Malaysia and Zimbabwe. It is banned as a beverage additive in Fiji, Israel, Peru, and Taiwan. Calcium saccharin is banned throughout the European Union. In the aftermath of World War II, though, saccharin production remained high. Fundamental changes in the American diet meant fewer people prepared meals at home, relying instead on preprocessed food. Presweetened products, often containing inexpensive saccharin—the output of an increasingly large food-processing industry—alarmed nutritionists, regulators, and health officials. The same 1950s that brought renewed hope for the country after two decades of Depression and War brought about the devastation of an artificial food culture. From that period onward, food mainly consisted of processed foods, almost all containing artificial sweeteners and other artificial chemicals, either as flavouring agents or preservatives. In addition, the rise of the fast food industry, i.e., the hamburger chains that sprouted up alongside the newly built national highway system, did not offer any better fare. Freeing Mom from the kitchen seemed to be the dominant theme as appliances and prepared foods became the ‘norm’. Empowered by the sexual revolution of the 1960s, the world started a journey of perpetual degradation, later dubbed a ‘technological disaster’. As America’s economy boomed, women entered the workforce as never before and food lost its natural components. Housewives spent less time in the kitchen, women felt more empowered to ‘not cook’, so food companies came to the rescue with a buffet of processed foods.
Foods were purchased in a can, package, or pouch. Soups were available as liquids or in dry form. Tang landed on supermarket shelves and frozen dinners sat on trays in front of TV sets. TV dinners were introduced in 1953 by Swanson, and with a flick of the wrist one could turn back the foil to display turkey in gravy, dressing, sweet potatoes, and peas, ready in about 30 minutes – all with no dishes to wash. Also came “Better Living Through Chemistry” – a slogan that persists to this day (Islam et al., 2015). Just like any other technology development scheme in Europe, this change in processing came from the demand of the Army during WWII for ready-to-eat meals. The food industry responded by ramping up new technologies in canning and freeze-drying to feed the troops. The world had not seen such a surge of food technology since the time of another wartime hero, Napoleon. The marketing of these foods presented a challenge, however. At first, many of them were less than palatable, so food companies hired home economists to develop fancy recipes and flooded magazines, newspapers, and T.V. with ads to broadcast their virtues. T.V. itself became the greatest disinformation vehicle known to mankind, allowing everyone to believe in the most preposterous kind of fairytale stories. Americans also bought television sets in record numbers and watched shows that represented their new idealized lives, like Ozzie and Harriet and Leave It to Beaver. Beaver’s mother, June Cleaver, was depicted as a housewife freed from household chores, often serene and perfectly dressed, with pearls and high heels, pushing a vacuum cleaner and putting meals on the family

table. Then came the Baby Boomer generation: fifty million babies were born from 1945 to 1960. Food marketing shifted to kids, with Tony the Tiger and fish sticks leading the campaign. Fast food strengthened its foothold in 1955, when Ray Kroc bought a hamburger stand from the McDonald’s brothers in San Bernardino, California. Disneyland opened in 1955 and was so popular that it ran out of food on the first day. In 1958, the American scientist Ancel Keys started the Seven Countries Study, which attempted to establish the association between diet and cardiovascular disease in different countries. The results indicated that the countries where fat consumption was highest also had the most heart disease, suggesting the idea that dietary fat caused heart disease. He initially studied 22 countries but only reported on seven: Finland, Greece, Italy, Japan, The Netherlands, United States, and Yugoslavia. The problem was that he left out countries where people eat a lot of fat but have little heart disease, such as Holland, Norway, and France, and countries where fat consumption is low but the rate of heart disease is high, such as Chile. Keys only used data from the countries that supported his theory. Had he understood the difference between real and artificial, this paradox could easily have been resolved. However, instead of cutting the artificial out of the diet, the entire culture was based on cutting out natural fat and sugar and replacing them with artificial fat and artificial sweeteners. Soon enough, scientists began to discover the carcinogenic roles of saccharin. After all, it is based on toluene, a known carcinogen. While saccharin consumption increased, the debate over its safety was never truly settled. Science, to the public, had issued too many contradictory or inconclusive opinions, so when the decision about saccharin fell to individuals, most responded to their desire for a no-consequences sweetener.
Partly in response to growing unease among regulators and the public, the U.S. Congress passed the Food Additives Amendment in 1958. In preparing the legislation, Congress heard testimony from members of the scientific community. For the first time in connection with food additives, scientists used the c-word: cancer. Representative James J. Delaney, a Democrat from New York, pushed hard for the addition of language specifically outlawing carcinogens. In its final form, the “Delaney Clause” required the U.S. Food and Drug Administration (F.D.A.) to prohibit the use of carcinogenic substances in food. Seemingly uncontroversial at the time – who would support adding cancer-causing agents to food? – it later proved contentious. In 1977, the U.S.A. moved to ban saccharin. The basis for the proposed ban was a study that documented an increase in cancer in rats being fed saccharin. The “Delaney Clause” of the Food Additive Amendments to the Federal Food, Drug, and Cosmetic Act states that no substance can be deemed safe if it causes cancer in humans or animals. Legislators had disastrously underestimated the data necessary to definitively declare a substance carcinogenic. The same year, the Cumberland Packing Corporation introduced

Sweet’N Low, a mixture of saccharin and cyclamate, another artificial sweetener. The two chemicals balanced each other, with cyclamate blunting the bitter aftertaste of saccharin. Sweet’N Low arguably tasted more like real sugar, and those little pink packets brought artificial sweeteners into diners and coffee shops. Meanwhile, the use of artificial sweeteners continued to increase among weight-conscious consumers. Between 1963 and 1967, artificially sweetened soft drinks (Coca-Cola’s Tab, for example) nearly tripled their market share, growing to over 10% of the soda market. In 1882, Constantin Fahlberg had declared saccharin harmless because he suffered no adverse effects 24 hours after taking a single dose. Similarly, Harvey Washington Wiley’s turn-of-the-century “poison squads” had declared a substance safe if the tester – a human guinea pig – remained healthy after ingestion. But post–World War II health science had begun investigating subtler, long-term effects. Research methodology had changed accordingly: studies observed longer spans of time, for example, and tried to control for a wider range of variables. Researchers shifted away from unstructured human testing toward animal testing that included control groups. Such research produced more and better data but increased complexity. No longer could a substance be labeled simply “poison” or “not poison.” The results of these sophisticated tests demanded sophisticated interpretation. In the late 1960s, three trends converged: increasing government regulation of the food-processing industry, the rise of artificial sweeteners, and the growing complexity and sophistication of health science. One of the first results of this convergence was the ban on cyclamate. Two 1968 studies linked the chemical to bladder cancer. The F.D.A. cited the Delaney Clause in recommending a ban, which was enacted the following year. That left only one artificial sweetener on the market: saccharin.
In 1970, oncologists at the University of Wisconsin Medical School published the results of a clinical study showing a higher incidence of bladder cancer among rats that consumed saccharin daily. Subsequent tests seemed to support the initial results, and in 1972 the F.D.A. removed saccharin from the list of food additives “generally recognized as safe.” Peter B. Hutt, chief legal counsel for the F.D.A., stated, “If it causes cancer—whether it’s 875 bottles a day or 11—it’s going off the market” (Islam et al., 2015). Saccharin producers and commercial consumers recognized the F.D.A.’s move as a precursor to an outright ban. Large chemical companies—Monsanto, Sherwin-Williams, and Lakeway Chemicals—began assembling their own evidence to oppose prohibition. Soda companies expected a painful financial hit, as did makers of diet food. But they also knew the process could take years, as the F.D.A. ordered new tests, analyzed the data, and—crucially—responded to public and political pressure. By 1977, a saccharin ban looked likely. The Cumberland Packing Corporation, which had presciently reformulated Sweet’N Low in the shadow of the cyclamate ban, vowed to fight any regulation. Marvin Eisenstadt, the president of the company, appeared on television and radio to argue his case. He denied the scientific validity of animal testing and declared access to saccharin a consumer right. He helped draft a two-page ad from the Calorie Control Council, the industry group he headed. The ad appeared in the New York Times explaining “why the

proposed ban on saccharin is leaving a bad taste in a lot of people’s mouths.” The ad described the ban as “another example of big government” and recommended action. “Fortunately, we can all conduct our own experiment in this matter. It’s called an experiment in democracy …. Write or call your congressman today and let him know how you feel about a ban on saccharin.” (Islam et al., 2015). In the week after the saccharin ban went into effect in 1977, Congress received more than a million letters. Marvin Eisenstadt and other public relations–savvy producers had turned the saccharin debate into a PR operation, and the public had responded. The Delaney Clause, as the F.D.A. interpreted it, required a ban on any known carcinogen in the food supply. But the original legislation failed to account for the complexity of scientific data. The clause’s premise of scientific consensus based on objective evidence and shared expertise no longer applied to the real world, if it ever had. Scientists could not agree on fundamental questions: What is a carcinogen? What daily dosage of a chemical might be reasonable for testing toxicity? Did the elevated risk of cancer in rats translate to an elevated risk in humans? Health science could not yet answer those questions definitively. But in the absence of incontrovertible scientific evidence, Marvin Eisenstadt could frame the debate as average citizens versus an encroaching big government. The F.D.A. understood the weakness of the existing laws and breathed a sigh of relief when, a week after the ban, Senator Ted Kennedy of the Senate Subcommittee on Health and Scientific Research moved to forestall the ban. The Saccharin Study and Labeling Act passed that year, declaring that all saccharin products would carry a warning label. It also imposed a two-year moratorium on any government action to remove saccharin from the market. More studies were needed, according to Congress. 
In suspending the proposed saccharin ban, Congress ordered that products containing the popular sweetener must carry a warning about its potential to cause cancer. The F.D.A. formally lifted its proposal to ban the sweetener in 1991 based on new studies, and the requirement for a warning label was eliminated by the Saccharin Notice Repeal Act in 1996. In response, Sweet’N Low sales skyrocketed. Those sales included longtime buyers stocking up in case of a ban, but the free publicity also brought in new customers. By 1979, 44 million Americans used saccharin daily. Consumers voted with their dollars. Congress renewed the moratorium every two years until 2000, when a National Institute of Environmental Health Sciences (NIEHS) study declared the earlier research invalid. The high dosages of saccharin given to the rats were a poor analog for human consumption, as rat digestion works differently from that of humans. The NIEHS recommended that Congress repeal the Labeling Act, officially declaring saccharin safe for human consumption. Finally, though, it was not government regulation that toppled saccharin from its throne as king of the artificial sweeteners—at least not directly. The threat of a saccharin ban led producers to research alternatives. While saccharin—300 times sweeter than sugar—languished in the shadow of a potential ban, a new generation of artificial sweeteners flourished. Aspartame®, 200 times sweeter than sugar, was discovered in 1965; sucralose, 600 times sweeter, in 1976; and neotame, 7,000 to 13,000 times sweeter than sugar, in 2002. Today, saccharin, once the undisputed king of artificial sweeteners, lags behind its newer counterparts, replaced by the next sweetest thing. One of the most distinctive features of the world sweetener market in recent years has been the growing realization of the economic attractiveness of blending sweeteners, both intense-intense blends and intense-caloric blends. With the fall in the relative price of intense sweeteners noted in Table 4.7, the introduction of third-generation sweeteners such as sucralose, alitame, and stevioside, and a relaxation of regulations (for example, the adoption of the 1996 Sweetener Directive in the EU), the trend towards blending intense sweeteners has continued.

Table 4.7 Commodity prices over the last few decades (from Islam et al., 2015).

Commodity                1998    1999    2000    2001
World Raw Sugar Price   127.4    90.0   117.7   123.7
Saccharin                91.9    81.8    76.7    72.7
Cyclamate (non-US)       94.2   137.2   145.8   123.2
Aspartame (non-US)       52.0    48.3    37.0    36.0
The same economic considerations and the desire to save money that have tempted EU food and beverage manufacturers into using more intense sweeteners have also been observed elsewhere. Regardless of prohibitive legislation, blending is increasing in all parts of the world, including Africa, Eastern Europe, and the former Soviet states. Basically, there are three main benefits that can be obtained by blending sweeteners: flavor-masking, enhanced potency, and sweetener synergy. It turns out that these are also the most addictive chemicals known to date, and this addictiveness became the best-selling feature of these chemicals. Information available in 1995 indicated that saccharin was produced in 20 countries, calcium saccharin was produced in five countries, and sodium saccharin was produced in 22 countries (Chemical Information Services, 1995). Table 4.8 shows how new alternatives to saccharin have been introduced at a higher price. They carry a higher profit margin and a greater chance of monopolization of the artificial sweetener market, reflected in the fact that the new chemicals have lower production costs. In comparison to sugar, intense sweeteners are much cheaper on a sugar-equivalent basis. This cost effectiveness in terms of sugar sweetness is one advantage of intense sweeteners. It does not reflect other properties, such as bulking effects, which are important for many food applications; only sugar or starch sweeteners can provide those bulking effects, while intense sweeteners cannot. The cost effectiveness of saccharin and cyclamate is unbeatable due to their cheap synthetic processes. A comparison of the prices of intense sweeteners relative to sugar is given in Figure 4.9.

Figure 4.9 Cost per tonne for various sugar products (2003 value) (from Islam et al., 2015).

Table 4.8 Prices of various artificial sweeteners (from Islam et al., 2015).

Sweetener      Price per kg
Acesulfame      80 Euro
Aspartame       40 Euro
Glycyrrhizin    50 Euro
Cyclamate      4.5 Euro
Saccharin      6.7 Euro
Sucralose      139 Euro
Stevioside      50 Euro
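One way to see the "much cheaper on a sugar-equivalent basis" claim is to divide each price in Table 4.8 by the sweetness multiplier the chapter quotes (saccharin roughly 300 times sugar, aspartame 200, sucralose 600). A rough sketch; the multipliers are the text's round numbers, not precise figures:

```python
# Sugar-equivalent cost: Euros of sweetener needed to match the
# sweetness of 1 kg of sugar. Prices from Table 4.8 (Euro/kg);
# sweetness multipliers are the round figures quoted in the chapter.
sweeteners = {
    "saccharin": {"price_eur_per_kg": 6.7, "sweetness_vs_sugar": 300},
    "aspartame": {"price_eur_per_kg": 40.0, "sweetness_vs_sugar": 200},
    "sucralose": {"price_eur_per_kg": 139.0, "sweetness_vs_sugar": 600},
}

def sugar_equivalent_cost(price_eur_per_kg, sweetness_vs_sugar):
    """Cost of the sweetener needed to match 1 kg of sugar's sweetness."""
    return price_eur_per_kg / sweetness_vs_sugar

for name, s in sweeteners.items():
    cost = sugar_equivalent_cost(s["price_eur_per_kg"], s["sweetness_vs_sugar"])
    print(f"{name}: {cost:.3f} Euro per kg of sugar-equivalent sweetness")
```

If raw sugar costs on the order of a few hundred Euro per tonne (tenths of a Euro per kg), saccharin delivers equivalent sweetness for roughly a tenth of sugar's cost, while aspartame and sucralose land closer to parity.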

Today, China is the world’s largest producer of saccharin, accounting for 30–40% of world production, with an annual production of approximately 18,000 tonnes in recent years. Its exports amounted to approximately 8,000 tonnes. In 1995, the United States produced approximately 3,400 tonnes of saccharin and its salts, and Japan produced approximately 1,900 tonnes. In the past, several western European companies produced sodium saccharin; however, by 1995, western European production had nearly ceased due to increasing imports of lower-priced saccharin from Asia (Bizzari et al., 1996). In 2001, Europe recorded the strongest growth in saccharin demand, with demand up by 13% owing to higher consumption in France, Spain, and Italy, which more than outweighed declines in the U.K. and Germany. The growth in saccharin demand is driven by the growing popularity of blends within the E.U., as well as rising demand from Eastern Europe. After a respite in 2000, Chinese exports of saccharin surged ahead in 2001, though local sales declined. This has squeezed other saccharin-producing countries, some of which have gone out of business. During 1999, the Chinese media reported that the government had ordered the closure of nine of the 14 major saccharin plants, with the effect of reducing the overall production capacity from about 47,000 tons (14.1 million tons sugar equivalent) to around 20,000 tons (6.0 million tons sugar equivalent). The annual production capacity for each of

the 14 major saccharin plants in operation in China ranges from 500 tons to over 10,000 tons. According to press reports, the Chinese government intended to limit saccharin production to about 24,000 tons (7.2 million tons sugar equivalent), together with a reduction in consumption to about 8,000 tons, which is about 60% of the current saccharin consumption level in China. In the end, only smaller factories were closed, taking only 3,000 tons (0.9 million tons sugar equivalent) of capacity out of the market. Table 4.10 shows global exports of saccharin for various countries. They all show tremendous growth over the last few decades.
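The "sugar equivalent" figures quoted in this paragraph are simply saccharin tonnage multiplied by saccharin's sweetness factor of about 300 relative to sugar, which can be verified directly:

```python
# Saccharin is ~300 times sweeter than sugar, so capacity expressed in
# 'sugar equivalent' terms is saccharin tonnage times 300.
SACCHARIN_SWEETNESS = 300

def sugar_equivalent_tonnes(saccharin_tonnes):
    return saccharin_tonnes * SACCHARIN_SWEETNESS

# The conversions quoted in the text all use the same factor:
assert sugar_equivalent_tonnes(47_000) == 14_100_000  # 14.1 million tons
assert sugar_equivalent_tonnes(20_000) == 6_000_000   # 6.0 million tons
assert sugar_equivalent_tonnes(24_000) == 7_200_000   # 7.2 million tons
assert sugar_equivalent_tonnes(3_000) == 900_000      # 0.9 million tons
```
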

Table 4.10 Global exports of saccharin (from USITC publication, http://www.usitc.gov/publications/701_731/pub4077.pdf).

                            Quantity (1,000 pounds)
Source                     2003    2004    2005    2006    2007
United States               365     187   2,090   2,210   1,822
China                    43,510  42,482  33,293  35,514  35,496
Nonsubject exporting countries:
  South Korea             3,922   4,417   4,256   3,933   5,117
  Germany                 1,931   2,564   2,156   2,410   3,062
  Taiwan                      1     150     612     732     878
  Netherlands               584     606     778     952     875
  Japan                     549     372     557     746     802
  Singapore                 538      51      44       0     586
  Belgium                   324     564     542     324     478
  Spain                     527     589     505     542     434
  India                     214     663     262     305     388
  United Kingdom            428     139     551     335     353
  Switzerland               526     254     367     240     289
  France                    159     276     203     209     280
  Austria                   126      57      62      29     216
  Turkey                      6      22       8      10     198
  South Africa               34      43      37     106     177
  All other                 403     545     481     366     761
  Total nonsubject       10,271  11,311  11,420  11,241  14,895
Total                    54,146  53,980  46,803  48,965  52,213
Regions:
  EU15 (external trade)   1,797   2,174   1,975   2,125   2,440
  EU27 (external trade)   1,360   1,812   1,587   1,287   1,709

Note.–Export figures are for HTS subheading 2925.11. Source: Global Trade Atlas.

4.7 The Culture of Aspartame

As soon as it was discovered that Saccharin® does not give its consumers diabetes but rather cancer, it was deemed that the time had arrived for a new line of research to invent another alternative to sugar. None other than a neurotoxin, invented through research in biological warfare, became the best candidate to salvage the humanity suffering from sugar shock. Aspartame® was discovered, and it was 200 times sweeter than sugar. More importantly, it needed no natural raw material – not even petroleum chemicals – was sweeter than Saccharin®, and logically would also be more addictive. Most importantly, Aspartame® is not Saccharin®, so it became the perfect candidate for a ‘Saccharin-free’ diet. Aspartame® cannot be linked to cancer, but it is linked to freezing of the brain. After all, it is a neurotoxin. Scientifically, nothing is natural about Aspartame®, from the source to the process involved. Yet the best even the nutritionists could come up with is that ‘overconsumption’ of such products would increase the risk of heart attack and stroke, cause breast cancer and colon cancer, and increase harmful cholesterol, i.e., the LDL that thickens arteries. The original invention document made Aspartame® an excellent neurotoxin fit for biological warfare. The following are well-known contributions of Aspartame®: marked changes in appetite and weight, as reflected by paradoxical weight gain or severe loss of weight; excessive insulin secretion and depletion of the insulin reserve; possible alteration of cellular receptor sites for insulin, with ensuing insulin resistance; neurotransmitter alterations within the brain and peripheral nerves; and the toxicity of each of the three components of Aspartame® (phenylalanine; aspartic acid; the methyl ester, which promptly becomes methyl alcohol, or methanol), and of their multiple breakdown products after exposure to heat or during prolonged storage.

4.7.1 Delinearized History of Aspartame
Aspartame, which is by far the most prominent artificial sweetener currently used in diet products, is also the most controversial of them all. Its origins are questionable, to say the least. Many claim it never should have been allowed on the market. The few who argue that it is safe have very little ground to stand on, for those educated on how it became approved. Many of the studies done to determine the safety of Aspartame®, in the process of its approval as a food additive, have had severe conflicts of interest, mainly due to inappropriate funding by Searle, the very same company that produces NutraSweet (their “street name” for Aspartame®). Dr. Robert Walton investigated the claims that Searle essentially bought its way into the market. The results he found were quite shocking. Of the 166 studies that he found to have relevance to human safety, 74 had been funded by Searle; the 92 remaining studies were funded independently. Unsurprisingly, of the 74 studies funded by Searle, 100% claimed that Aspartame® was safe for human use. As for the independently funded studies, 92% of them

identified health concerns with Aspartame® and found it to be unsafe for human consumption. The sugar-like taste of Aspartame® was discovered accidentally by James Schlatter, an American drug researcher at G.D. Searle and Co., in 1965. While working on an anti-ulcer drug, he inadvertently spilled some APM on his hand. Figuring that the material was not toxic, he went about his work without washing it off. He discovered APM’s sweet taste when he licked his finger to pick up a piece of weighing paper. This initial breakthrough then led the company to screen hundreds of modified versions of APM. The company pursued and was granted United States patent 3,492,131 and various international patents, and the initial discovery was commercialized. The U.S. patent expired in 1992, and the technology is now available to any company that wants to use it. Aspartame has been marketed since 1983 by Searle under the brand names NutraSweet® and Equal®. Currently, NutraSweet® is a very popular ingredient and is used in more than 4,000 products, including chewing gum, yogurt, diet soft drinks, fruit juices, puddings, cereals, and powdered beverage mixes. In the U.S. alone, NutraSweet®’s sales topped $705 million in 1993, according to the company. One of the earliest tests, conducted at the University of Wisconsin in 1967 by Dr. Harold Waisman, was carried out on monkeys who drank milk containing Aspartame®. Of the seven monkeys fed the mixture, one died and five others experienced grand mal seizures. Despite these early warning signs, Searle pushed on. In 1971, a neuroscientist by the name of Dr. John Olney conducted several studies which showed that the aspartic acid found in Aspartame® caused holes in the brains of baby mice. Later, one of Searle’s own researchers conducted a similar study and reached the same results as those demonstrated by Dr. Olney. Again, Searle pushed on. In 1976, an F.D.A.
investigation of Searle was initiated, sparked by the many concerns that Searle’s own studies of Aspartame® were inconsistent with research from independent studies. The investigation found that Searle’s tests were not only full of inaccuracies but also contained manipulated data. An investigator involved was quoted as stating that they “had never seen anything as bad as Searle’s testing” (Reiley, 2014). Shortly after the investigation, the F.D.A. sent a formal request to the U.S. Attorney’s office to begin grand jury proceedings. Not surprisingly, one of the most significant events of this procession saw Samuel Skinner, the U.S. Attorney in charge of the investigation, resign from the attorney’s office and take a position within Searle’s law firm, allowing Searle to buy itself out of a bad situation. After many years of quibbling, the F.D.A. initially approved Aspartame®’s use as a sweetener in 1980. However, numerous objections surfaced, and most scientific evidence showed that the best application of Aspartame® was as a neurotoxin. Nevertheless, the F.D.A. and the Centers for Disease Control concluded in 1984 that the substance was safe and did not represent a widespread health risk. This conclusion was further supported by the American Medical

Association in 1985, and Aspartame® has been gaining market share ever since. In addition to its use in the United States, Aspartame® has also been approved for use in over 93 countries around the world. In 1985, G.D. Searle and Co. (referred to as Searle above) became the pharmaceutical unit of Monsanto – the company that received full protection in 2013 from U.S. President Barack Obama. It was also the same company that developed the controversial drug Celebrex. Figure 4.10 shows Aspartame® consumption over the years for various regions. The figure shows that Aspartame® consumption slowed in 1999 for the first time in history. The real growth has been in the areas of sports drinks, fruit juices, and vitamin-added water. Ironically, these drinks have other components that are even more toxic than Aspartame®. In addition, there has been an increasing trend of blending with other artificial sweeteners. Figure 4.11 shows the trend in major artificial sweeteners.

Figure 4.10 Aspartame market growth since 1984 (From Khan and Islam, 2016).

Figure 4.11 Market share of various artificial sweeteners (from Islam et al., 2015).

4.7.2 Timeline
This section provides a timeline of the development, approval, and use of Aspartame® and Neotame.
DECEMBER 1965 While working on an ulcer drug, a chemist at pharmaceutical manufacturer G.D. Searle accidentally discovers Aspartame®, a substance that is 180 times sweeter than sugar yet has no calories.
SPRING 1967 Searle begins the safety tests necessary for F.D.A. approval.

AUTUMN 1967 G.D. Searle approaches eminent biochemist Dr. Harry Waisman, director of the University of Wisconsin’s Joseph P. Kennedy Jr. Memorial Laboratory of Mental Retardation Research and a respected expert in the toxicity of phenylalanine (which comprises 50 per cent of the Aspartame® formula), to conduct a study of the effects of Aspartame® on primates. Of seven monkeys fed Aspartame® mixed with milk, one dies and five others have grand mal epileptic seizures.
SPRING 1971 Dr. John Olney, professor of neuropathology and psychiatry at Washington University in St Louis School of Medicine, whose research into the neurotoxic food additive monosodium glutamate (MSG, a chemical cousin of Aspartame®) was responsible for having it removed from baby foods, informs Searle that his studies show that aspartic acid, one of the main constituents of Aspartame®, causes holes in the brains of infant mice. One of Searle’s researchers, Ann Reynolds, confirms Olney’s findings in a similar study.
FEBRUARY 1973 Searle applies for F.D.A. approval and submits over 100 studies it claims support Aspartame®’s safety. Neither the dead monkeys nor the mice with holes in their brains are included in the submission.
12 SEPTEMBER 1973 In a memorandum, Dr. Martha M. Freeman of the F.D.A. Division of Metabolic and Endocrine Drug Products criticises the inadequacy of the information submitted by Searle, with particular regard to one of the compound’s toxic breakdown products, diketopiperazine (D.K.P.). She recommends that marketing of Aspartame® be contingent upon the sweetener’s proven clinical safety.
26 JULY 1974 F.D.A. commissioner Dr. Alexander Schmidt grants Aspartame® its first approval as a ‘food additive’ for restricted use in dry foods. This approval comes despite the fact that his own scientists found serious deficiencies in the data submitted by Searle.
AUGUST 1974 Before Aspartame® can reach the marketplace, Dr.
John Olney, James Turner (attorney, consumer advocate, and former ‘Nader’s Raider’ who was instrumental in removing the artificial sweetener cyclamate from the U.S. market), and the group Label Inc. (Legal Action for Buyers’ Education and Labeling) file a formal objection to Aspartame®’s approval with the F.D.A., citing evidence that it could cause brain damage, particularly in children.
JULY 1975 Concerns about the accuracy of test data submitted to the F.D.A. by Searle for a wide range of products prompt Schmidt to appoint a special task force to examine irregularities in 25 key

studies for Aspartame® and Searle drugs Flagyl, Aldactone and Norpace. 5 DECEMBER 1975 Searle agrees to an inquiry into Aspartame® safety concerns. Searle withdraws Aspartame® from the market pending its results. The sweetener remains off the market for nearly 10 years while investigations into its safety and into Searle’s alleged fraudulent testing procedures are ongoing. However, the inquiry board does not convene for another four years. 24 MARCH 1976 The F.D.A. task force completes its 500-page report on Searle’s testing procedures. The final report notes faulty and fraudulent product testing, knowingly misrepresented product testing, knowingly misrepresented and ‘manipulated’ test data, and instances of irrelevant animal research in all the products reviewed. Schmidt said: ‘[Searle’s studies were] incredibly sloppy science. What we discovered was reprehensible.’ JULY 1976 The F.D.A. forms a new task force, headed by veteran inspector Jerome Bressler, to further investigate irregularities in Searle’s Aspartame® studies uncovered by the original task force. The findings of the new body would eventually be incorporated into a document known as the Bressler Report. 10 JANUARY 1977 F.D.A. chief counsel Richard Merrill formally requests the U.S. Attorney’s office to begin grand jury proceedings to investigate whether indictments should be filed against Searle for knowingly misrepresenting findings and ‘concealing material facts and making false statements’ in Aspartame® safety tests. This is the first time in the F.D.A.’s history that it requests a criminal investigation of a manufacturer. 26 JANUARY 1977 While the grand jury investigation is underway, Sidley & Austin, the law firm representing Searle, begins recruitment negotiations with Samuel Skinner, the U.S. attorney in charge of the investigation. Skinner removes himself form the investigation and the case is passed to William Conlon. 
8 MARCH 1977
Searle hires prominent Washington insider Donald Rumsfeld as its new C.E.O. to try to turn the beleaguered company around. A former member of Congress and defence secretary in the Ford administration, Rumsfeld brings several of his Washington colleagues in as top management.

1 JULY 1977
Samuel Skinner leaves the U.S. Attorney’s office and takes a job with Searle’s law firm. Conlon takes over Skinner’s old job.

1 AUGUST 1977
The Bressler Report is released. It focuses on three key Aspartame® studies conducted by Searle. The report finds that in one study 98 of the 196 animals died but were not autopsied until later dates, making it impossible to ascertain the actual cause of death. Tumours were removed from live animals and the animals placed back in the study. Many other errors and inconsistencies are noted. For example, a rat was reported alive, then dead, then alive, then dead again. Bressler comments: ‘The question you have got to ask yourself is: why wasn’t greater care taken? Why didn’t Searle, with their scientists, closely evaluate this, knowing full well that the whole society, from the youngest to the elderly, from the sick to the unsick, will have access to this product?’ The F.D.A. creates yet another task force to review the Bressler Report. The review is carried out by a team at the F.D.A.’s Center for Food Safety and Applied Nutrition and headed by senior scientist Jacqueline Verrett.

28 SEPTEMBER 1977
The F.D.A. publishes a report exonerating Searle of any wrongdoing in its testing procedures. Jacqueline Verrett will later testify to the U.S. Senate that her team was pressured into validating data from experiments that were clearly a ‘disaster’.

8 DECEMBER 1977
Despite complaints from the Justice Department, Conlon stalls the grand jury prosecution for so long that the statute of limitations on the Aspartame® charges runs out and the investigation is dropped. Just over a year later Conlon joins Searle’s law firm, Sidley & Austin.

1978
The journal Medical World News reports that the methanol content of Aspartame® is 1,000 times greater than that of most foods under F.D.A. control. In high concentrations methanol, or wood alcohol, is a lethal poison.

1 JUNE 1979
The F.D.A. finally establishes a public board of inquiry (PBOI), comprising three scientists whose job is to review the objections of Olney and Turner to the approval of Aspartame® and rule on safety issues surrounding the sweetener.

1979
In spite of the uncertainties over Aspartame®’s safety in the U.S., Aspartame® becomes available, primarily in pharmaceutical products, in France. It is sold under the brand name Canderel and manufactured by the food corporation Merisant.

30 SEPTEMBER 1980
The F.D.A.’s PBOI votes unanimously against Aspartame®’s approval, pending further investigations of brain tumours in animals. The board says it ‘has not been presented with proof of reasonable certainty that Aspartame® is safe for use as a food additive’.

1980

Canderel is now marketed throughout much of Europe (but not in the UK) as a low-calorie sweetener.

JANUARY 1981
Rumsfeld states in a Searle sales meeting that he is going to make a big push to get Aspartame® approved within the year. Rumsfeld vows to ‘call in his markers’ and use political rather than scientific means to get the F.D.A. on side.

20 JANUARY 1981
Ronald Reagan is sworn in as president of the U.S. Reagan’s transition team, which includes Rumsfeld, nominates Dr. Arthur Hull Hayes Jr. to be the new F.D.A. commissioner.

21 JANUARY 1981
One day after Reagan’s inauguration, Searle re-applies to the F.D.A. for approval to use Aspartame® as a food sweetener.

MARCH 1981
An F.D.A. commissioner’s panel is established to review issues raised by the PBOI.

19 MAY 1981
Arthur Hull Hayes Jr. appoints a five-person commission to review the PBOI’s decision. Three of the five F.D.A. scientists on it advise against approval of Aspartame®, stating on the record that Searle’s tests are unreliable and not adequate to determine the safety of Aspartame®. Hayes installs a sixth member on the commission, and the vote becomes deadlocked.

15 JULY 1981
Hayes ignores the recommendations of his own internal F.D.A. team, overrules the PBOI findings and gives initial approval for Aspartame® to be used in dry products on the basis that it has been shown to be safe for its proposed uses.

22 OCTOBER 1981
The F.D.A. approves Aspartame® as a tabletop sweetener and for use in tablets, breakfast cereals, chewing gum, dry bases for beverages, instant coffee and tea, gelatines, puddings, fillings, dairy-product toppings and as a flavour enhancer for chewing gum.

1982
The Aspartame®-based sweetener Equal, manufactured by Merisant, is launched in the U.S.

15 OCTOBER 1982
The F.D.A. announces that Searle has filed a petition for Aspartame® to be approved as a sweetener in carbonated beverages, children’s vitamins and other liquids.

1983

Searle attorney Robert Shapiro gives Aspartame® its commercial name, NutraSweet. The name is trademarked the following year. Shapiro later becomes president of Searle. He eventually becomes president and then chairman and C.E.O. of Monsanto, which will buy Searle in 1985.

8 JULY 1983
Aspartame® is approved for use in carbonated beverages and syrup bases in the U.S. and, three months later, Britain. Before the end of the year Canderel tablets are launched in the UK. Granular Canderel follows in 1985.

8 AUGUST 1983
James Turner, on behalf of himself and the Community Nutrition Institute, and Dr. Woodrow Monte, Arizona State University’s director of food science and nutritional laboratories, file petitions with the F.D.A. objecting to Aspartame®’s approval based on possible serious adverse effects from the chronic intake of the sweetener. Monte also cites concern about the chronic intake of methanol associated with Aspartame® ingestion.

SEPTEMBER 1983
Hayes resigns as F.D.A. commissioner under a cloud of controversy about his taking unauthorised rides aboard a General Foods jet (General Foods was and is a major purchaser of Aspartame®). He serves briefly as provost at New York Medical College, and then takes a position as senior scientific consultant with Burson-Marsteller, the chief public relations firm for both Searle and Monsanto.

AUTUMN 1983
The first carbonated beverages containing Aspartame® go on sale in the U.S.

17 FEBRUARY 1984
The F.D.A. denies Turner and Monte’s requests for a hearing, noting that Aspartame®’s critics had not presented any unresolved safety questions. Regarding Aspartame®’s breakdown components, the F.D.A. says that it has reviewed animal, clinical and consumption studies submitted by the sweetener’s manufacturer, as well as the existing body of scientific data, and concludes that ‘the studies demonstrated the safety of these components’.

MARCH 1984
Public complaints about the adverse effects of Aspartame® begin to come in. The F.D.A. requests that the U.S. agency the Centers for Disease Control and Prevention (CDC) begin investigations of a select number of cases of adverse reactions to Aspartame®.

30 MAY 1984
The F.D.A. approves Aspartame® for use in multivitamins.

JULY 1984
A study by the state of Arizona Department of Health into Aspartame® is published in the Journal of Applied Nutrition. It determines that soft drinks stored at elevated temperatures promote more rapid deterioration of Aspartame® into poisonous methanol.

2 NOVEMBER 1984
The C.D.C. review of public complaints relating to Aspartame® culminates in a report, Evaluation of Consumer Complaints Related to Aspartame Use, which reviews 213 of 592 cases and notes that re-challenge tests show that sensitive individuals consistently produce the same adverse symptoms each time they ingest Aspartame®. The reported symptoms include: aggressive behaviour, disorientation, hyperactivity, extreme numbness, excitability, memory loss, loss of depth perception, liver impairment, cardiac arrest, seizures, suicidal tendencies and severe mood swings. The C.D.C. nevertheless concludes that Aspartame® is safe to ingest. On the same day that the C.D.C. exonerates Aspartame®, Pepsi announces that it is dropping saccharin and adopting Aspartame® as the sweetener in all its diet drinks. Others quickly follow suit.

1 OCTOBER 1985
Monsanto, the producer of recombinant bovine growth hormone, genetically engineered soya beans, the pesticide Roundup and many other industrial and agricultural chemicals, purchases Searle for $2.7 billion.

21 APRIL 1986
The U.S. Supreme Court refuses to consider arguments from the Community Nutrition Institute and other consumer groups that the F.D.A. has not followed proper procedures in approving Aspartame®, and that the liquid form of the artificial sweetener may cause brain damage in heavy users of low-calorie soft drinks.

16 OCTOBER 1986
Turner files another citizen’s petition, this time concerning the risk of seizures and eye damage from Aspartame®. The petition argues that medical records of 140 Aspartame® users show them to have suffered from epileptic seizures and eye damage after consuming products containing the sweetener and that the F.D.A. should ban Aspartame® as an ‘imminent hazard to the public health’.

21 NOVEMBER 1986
The F.D.A. denies Turner’s new petition, saying: ‘The data and information supporting the safety of Aspartame® are extensive. It is likely that no food product has ever been so closely examined for safety. Moreover, the decisions of the agency to approve Aspartame® for its uses have been given the fullest airing that the legal process requires.’

28 NOVEMBER 1986
The F.D.A. approves Aspartame® for non-carbonated frozen or refrigerated concentrates and single-strength fruit juice, fruit drinks, fruit-flavoured drinks, imitation fruit-flavoured drinks, frozen stock-type confections and novelties, breath mints and tea beverages.

DECEMBER 1986
The F.D.A. declares Aspartame® safe for use as an inactive ingredient, provided labelling meets certain specifications.

1987
An F.D.A. report on adverse reactions associated with Aspartame® states that the majority of the complaints about Aspartame® – now numbering 3,133 – refer to neurological effects.

2 JANUARY 1987
NutraSweet’s Aspartame® patent runs out in Europe, Canada and Japan. More companies are now free to produce Aspartame® sweeteners in these countries.

12 OCTOBER 1987
United Press International, a leading global news-syndication organisation, reports that more than 10 federal officials involved in the decision to approve Aspartame® have now taken jobs in the private sector that are linked to the Aspartame® industry.

3 NOVEMBER 1987
A U.S. Senate hearing is held to address the issue of Aspartame® safety and labelling. The hearing reviews the faulty testing procedures and the ‘psychological strategy’ used by Searle to help ensure Aspartame®’s approval. Other information that comes to light includes the fact that Aspartame® was once on a Pentagon list of prospective biochemical-warfare weapons. Numerous medical and scientific experts testify as to the toxicity of Aspartame®. Among them is Dr. Verrett, who reveals that, while compiling its 1977 report, her team was instructed not to comment on or be concerned with the overall validity of the studies. She states that questions about birth defects have not been answered. She also states that increasing the temperature of the product leads to an increase in production of DKP, a substance shown to increase uterine polyps and change blood cholesterol levels. Verrett comments: ‘It was pretty obvious that somewhere along the line, the bureau officials were working up to a whitewash.’

1989
The F.D.A. has received more than 4,000 complaints from consumers about adverse reactions to the sweetener.

14 OCTOBER 1989
Dr. H.J. Roberts, director of the Palm Beach Institute for Medical Research, claims that several recent aircraft accidents involving confusion and aberrant pilot behaviour were caused by ingestion of products containing Aspartame®.

20 JULY 1990

The Guardian publishes a major investigation of Aspartame® and delivers to government officials ‘a dossier of evidence’ that draws heavily on the transcripts of the Bressler Report and demands that the government review the safety of Aspartame®. No review is undertaken. The Guardian is taken to court by Monsanto and forced to apologise for printing its story.

1991
The U.S. National Institutes of Health publishes Adverse Effects of Aspartame: January ‘86 through December ‘90, a bibliography of 167 studies documenting adverse effects associated with Aspartame®.

1992
NutraSweet signs agreements with Coca-Cola and Pepsi stipulating that it is their preferred supplier of Aspartame®.

30 JANUARY 1992
The F.D.A. approves Aspartame® for use in malt beverages, breakfast cereals, and refrigerated puddings and fillings, and in bulk form (in large packages, like sugar) for tabletop use. NutraSweet markets these bulk products under the name ‘NutraSweet Spoonful’.

14 DECEMBER 1992
NutraSweet’s U.S. patent for Aspartame® expires, opening up the market for other companies to produce the substance.

19 APRIL 1993
The F.D.A. approves Aspartame® for use in hard and soft candies, non-alcoholic flavoured beverages, tea beverages, fruit juices and concentrates, baked goods and baking mixes, and frostings, toppings and fillings for baked goods.

28 FEBRUARY 1994
Aspartame® now accounts for the majority (75 per cent) of all the complaints in the U.S. adverse-reaction monitoring system. The U.S. Department of Health and Human Services compiles a report that brings together all current information on adverse reactions attributed to Aspartame®. It lists 6,888 complaints, including 649 reported by the CDC and 1,305 reported by the F.D.A.

APRIL 1995
Consumer activist, and founder of anti-Aspartame® group Mission Possible, Betty Martini uses the U.S.’s Freedom of Information Act to force the F.D.A. to release an official list of adverse effects associated with Aspartame® ingestion. Culled from 10,000 consumer complaints, the list includes four deaths and more than 90 unique symptoms, a majority of which are connected to impaired neurological function. They include: headache; dizziness or problems with balance; mood change; vomiting and nausea; seizures and convulsions; memory loss; tremors; muscle weakness; abdominal pains and cramps; change in vision; diarrhea; fatigue and weakness; skin rashes; deteriorating vision; joint and musculoskeletal pain. By the F.D.A.’s own admission, fewer than 1 per cent of those who have problems with something they consume ever report it to the F.D.A. This means that around 1 million people could have been experiencing adverse effects from ingesting Aspartame®.

12 JUNE 1995
The F.D.A. announces it has no further plans to continue to collect adverse-reaction reports or monitor research on Aspartame®.

27 JUNE 1996
The F.D.A. removes all restrictions from Aspartame® use and approves it as a ‘general-purpose sweetener’, meaning that Aspartame® can now be used in any food or beverage.

NOVEMBER 1996
Drawing on data compiled by the U.S. National Cancer Institute’s Surveillance, Epidemiology and End Results programme, which collects and distributes data on all types of cancer, Olney publishes peer-reviewed research in the Journal of Neuropathology and Experimental Neurology. It shows that brain-tumour rates have risen in line with Aspartame® consumption and that there has been a significant increase in the conversion of less deadly tumours into much more deadly ones.

DECEMBER 1996
The results of a remarkable study conducted by Dr. Ralph G. Walton, professor of clinical psychology at Northeastern Ohio Universities, are revealed. Commissioned by the hard-hitting U.S. national news programme 60 Minutes, it sheds some light on the absurdity of Aspartame® safety studies. Walton reviewed 165 separate studies published in the preceding 20 years in peer-reviewed medical journals. Seventy-four of the studies were industry-funded, all of which attested to Aspartame®’s safety. Of the other 91 non-industry-funded studies, 84 identified adverse health effects. Six of the seven non-industry-funded studies that were favourable to Aspartame® were from the F.D.A., which has a public record of strong pro-industry bias.
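The study tallies quoted above can be laid out explicitly. The following is a minimal sketch; the counts are simply those reported in the text, not recomputed from Walton's review itself:

```python
# Breakdown of the 165 Aspartame® studies reviewed by Walton,
# using the counts quoted in the text above.
industry_funded = {"favourable": 74, "adverse": 0}
independent = {"favourable": 7, "adverse": 84}

# The two groups together account for all 165 reviewed studies.
total = sum(industry_funded.values()) + sum(independent.values())
print(total)  # 165

# Every industry-funded study was favourable, while the overwhelming
# majority of independent studies reported adverse effects.
share_adverse = independent["adverse"] / sum(independent.values())
print(f"{share_adverse:.0%}")  # 92%
```

The near-perfect split by funding source, rather than any single study, is what the 60 Minutes commission highlighted.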
To this day, the industry-funded studies are the ones that are always quoted to the press and in official rebuttals to Aspartame® critics. They are also the studies given the greatest weight during the approval process and in official safety reviews.

10 FEBRUARY 1998
Monsanto petitions the F.D.A. for approval of a new tabletop sweetener called Neotame. It is around 60 times sweeter than Aspartame® and up to 13,000 times sweeter than sugar. Neotame is less prone to breaking down in heat and in liquids than Aspartame® because of the addition of 3,3-dimethylbutyl, a poorly studied chemical with suspected neurotoxic effects. Strengthening the bond between Aspartame®’s main constituents eliminates the need for a health warning directed at people suffering from PKU.

13 MAY 1998

Independent scientists from the University of Barcelona publish a landmark study clearly showing that Aspartame® is transformed into formaldehyde in the bodies of living specimens (in this case rats), and that this formaldehyde spreads throughout the specimens’ vital organs, including the liver, kidneys, eyes and brain. The results fly in the face of manufacturers’ claims that Aspartame® does not break down into formaldehyde in the body, and bolster the claims of Aspartame® critics that many of the symptoms associated with Aspartame® toxicity are caused by the poisonous and cumulative effects of formaldehyde.

OCTOBER 1998
The U.K.’s Food Commission publishes two surveys on sweeteners. The first shows that several leading companies, including St Ivel, Müller and Sainsbury’s, have ignored the legal requirement to state ‘with sweeteners’ next to the name of the product. The second reveals that Aspartame® not only appears in ‘no-sugar-added’ and ‘light’ beverages but also in ordinary non-dietetic drinks, because it is three times cheaper than ordinary sugar.

8 FEBRUARY 1999
Monsanto files a petition with the F.D.A. for approval of the general use of Neotame.

20 JUNE 1999
An investigation by The Independent on Sunday reveals that Aspartame® is made using a genetic-engineering process. The Aspartame® component phenylalanine is naturally produced by bacteria, and the newspaper reveals that Monsanto has genetically engineered the bacteria to make them produce more phenylalanine. Monsanto claims that the process had not been revealed previously because no modified DNA remains in the finished product, and insists that the product is completely safe, though scientists counter that toxic effects cannot be ruled out in the absence of long-term studies. A Monsanto spokeswoman says that while Aspartame® for the U.S. market is often made using genetic engineering, Aspartame® supplied to British food producers is not. The extent to which U.S. brands of low-calorie products containing genetically engineered Aspartame® have been imported into Britain is unclear.

MAY 2000
Monsanto, under pressure – not least from the worldwide resistance to genetically manipulated food and ongoing lawsuits – sells NutraSweet to J.W. Childs Associates, a private-equity firm made up of several former Monsanto managers, for $440m. Monsanto also sells its equity interest in two European sweetener joint ventures, NutraSweet AG and Euro-Aspartame SA.

10 DECEMBER 2001
The U.K.’s Food Standards Agency requests that the European Commission Scientific Committee on Food conduct an updated review of Aspartame®. The committee is asked to look carefully at more than 500 scientific papers published between 1988 and 2000 and any other new scientific research not examined previously.

9 JULY 2002
The F.D.A. approves the tabletop and general use of Neotame. The ‘fast-track’ approval raises eyebrows because, historically, the F.D.A. takes at least 10 years to approve food additives. Neotame is also approved for use in Australia and New Zealand, but has yet to be approved in the U.K.

10 DECEMBER 2002
The European Commission Scientific Committee on Food publishes its final report on Aspartame®. The 24-page report largely ignores independent research and consumer complaints, relying instead on frequently cited articles in books and reviews put together by employees or consultants of Aspartame® manufacturers. When independent research is cited, it is generally refuted with industry-sponsored data. An animal study showing Aspartame®’s disruption of brain chemistry, a human study linking Aspartame® to neurophysiological changes that could increase seizure risk, another linking Aspartame® use with depression in individuals susceptible to mood disorder, and two others linking Aspartame® ingestion with headaches are all dismissed. The report’s conclusion amounts to a single sentence: ‘The committee concluded that there is no evidence to suggest that there is a need to revise the outcome of the earlier risk assessment or the [acceptable daily intake] previously established for Aspartame®.’ As with the F.D.A., there are concerns about the neutrality of some of the committee’s members and their links with the International Life Sciences Institute (ILSI), an industry group that funds, among other things, research into Aspartame®. ILSI members include Monsanto, Coca-Cola and Pepsi.

19 FEBRUARY 2003
Members of the European Parliament’s Environment, Public Health and Consumer Policy Committee approve the use of sucralose (see page 50) and an aspartame-acesulfame salt compound (manufactured in Europe by the aspartame-producing Holland Sweetener Company and sold under the name Twinsweet), agreeing to a review of the use of both in three years’ time. At the same time, a request by European greens that the committee re-evaluate the safety of Aspartame® and improve the labelling of aspartame-containing products is rejected.

MAY 2004
The feature-length documentary Sweet Misery is released on DVD (see http://www.soundandfuryproductions.com). Part documentary, part detective story, it includes interviews with people who have been harmed by Aspartame®, as well as credible testimony from advocates, doctors, lawyers and long-time campaigners, including James Turner, H.J. Roberts and renowned neurosurgeon Dr. Russell Blaylock.

SEPTEMBER 2004
U.S. consumer group the National Justice League files a $350m class-action lawsuit against the NutraSweet Corporation (the current owner of Aspartame® products), the American Diabetes Association and Monsanto. Some 50 other defendants have yet to be named, but mentioned throughout the lawsuit is the central role of Donald Rumsfeld in helping to get Aspartame® approved through the F.D.A. The plaintiffs maintain that this litigation will prove how deadly Aspartame® is when it is consumed by humans. Little progress has been made so far in bringing the action to court. The NutraSweet Company reopens its plant in Atlanta, Georgia (dormant since 2003), in order to meet increased demand for its sweetener. Aspartame®, sold commercially as NutraSweet, Equal, Equal-Measure, Spoonful, Canderel and Benevia, is currently available in more than 100 countries and used in more than 5,000 products by at least 250 million people every day. Worldwide, the Aspartame® industry’s sales amount to more than $1 billion yearly. The U.S. is the primary consumer.

JULY 2005
The Ramazzini Institute in Bologna, a non-profit private institution set up to research the causes of cancer, releases the results of a very large, long-term animal study into Aspartame® ingestion. The study shows that Aspartame® causes lymphomas and leukaemia in female animals fed Aspartame® at doses around 20 milligrams per kilogram of body weight, or around half the accepted daily intake for humans.

Figure 4.12 shows the chemical structure of Aspartame®. It is important to analyze the compounds that make up aspartame in order to appreciate its deadly consequences. Aspartame is comprised of four deadly compounds: phenylalanine, aspartic acid, methanol, and diketopiperazine (DKP). The chemical bond that holds these constituents together is fairly weak. As a result, aspartame readily breaks down into its component parts in a variety of circumstances: in liquids; during prolonged storage; when exposed to heat in excess of 86° Fahrenheit (30° Centigrade); and when ingested. These constituents further break down into other toxic by-products, namely formaldehyde, formic acid and aspartylphenylalanine diketopiperazine (DKP).

Figure 4.12 Chemical structure of Aspartame®.

4.7.3.1 Phenylalanine

The largest component of Aspartame® is phenylalanine, making up 50% of the artificial sweetener. Phenylalanine is an amino acid normally found in the brain. Persons with the genetic disorder phenylketonuria (PKU) cannot metabolize phenylalanine, which leads to dangerously high (sometimes lethal) levels of phenylalanine in the brain. It has been shown that ingesting Aspartame®, especially along with carbohydrates, can lead to excess levels of phenylalanine in the brain even in persons who do not have PKU. A subtler point is that the phenylalanine in Aspartame® is an artificial version of the amino acid naturally present in the brain. Replacing a natural chemical with an artificial one is, where the brain is concerned, equivalent to adding a neurotoxin. A similar effect has been demonstrated with natural vitamin C (which prevents cancer) and artificial vitamin C (ascorbic acid, which can cause cancer). This presents a serious problem, since high levels of phenylalanine in the brain can cause the levels of serotonin to decrease, leading to emotional disorders such as depression. It has been shown in human clinical trials that blood phenylalanine levels are increased significantly in those who chronically use Aspartame®; even a single use of Aspartame® raised blood phenylalanine levels. In his testimony before the U.S. Congress, Dr. Louis J. Elsas showed that high blood phenylalanine can be concentrated in parts of the brain and is especially dangerous for infants and fetuses. He also showed that phenylalanine is metabolized much more efficiently by rodents than by humans; thus, rodent studies showing minimal effects of phenylalanine should be taken with a grain of salt.

4.7.3.2 Aspartic Acid

The next component of Aspartame® is aspartic acid, which makes up roughly 40% of the sweetener. Dr. Russell L. Blaylock, a professor of neurosurgery at the Medical University of Mississippi, has cited 500 scientific references to show how excess free excitatory amino acids such as aspartic acid in our food supply are causing serious chronic neurological disorders and a myriad of other acute symptoms. Aspartate (from aspartic acid) acts as a neurotransmitter in the brain, but too much of it can kill certain neurons by allowing too much calcium into the cells; this influx triggers excessive amounts of free radicals, which kill the cells. Curiously, the blood-brain barrier, which normally protects the brain from excess influx of aspartate (as well as many other toxins), does not fully protect against excess levels of this “excitotoxin” in the blood. Thus, Aspartame® ingestion has been associated with a number of neurological defects such as memory loss, multiple sclerosis, headaches, vision problems, dementia, brain lesions, and more. One common complaint of persons suffering from the effects of Aspartame® is memory loss. Ironically, in 1987, G.D. Searle, the manufacturer of Aspartame®, undertook a search for a drug to combat memory loss caused by excitatory amino acid damage.

4.7.3.3 Methanol

Methanol (or wood alcohol) is a deadly poison and makes up 10% of Aspartame®. Methanol’s toxicity is well documented. Methanol is gradually released in the small intestine when the methyl group of Aspartame® encounters the enzyme chymotrypsin. Methanol is then broken down into formic acid and formaldehyde in the body. Formaldehyde is a deadly neurotoxin; it is the same material that is used to preserve dead bodies.
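The composition figures given in this section (50 per cent phenylalanine, 40 per cent aspartic acid, 10 per cent methanol) can be tabulated in a short sketch; the percentages are those reported in the text:

```python
# Reported composition of Aspartame®, per the percentages in the text.
composition = {
    "phenylalanine": 50,   # per cent; the largest component
    "aspartic acid": 40,   # per cent
    "methanol": 10,        # per cent; released as the methyl group breaks off
}

# The three constituents account for the whole formula.
assert sum(composition.values()) == 100

# The dominant constituent by share.
print(max(composition, key=composition.get))  # phenylalanine
```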

An Environmental Protection Agency (EPA) of methanol states that methanol “is considered a cumulative poison due to the low rate of excretion once it is absorbed. In the body, methanol is oxidized to formaldehyde and formic acid; both of these metabolites are toxic.” The EPA recommends a limit of consumption of 7.8 mg/day. To give one some perspective, a one-liter bottle of Diet Coke contains about 56 mg of methanol. Heavy users of aspartamecontaining products consume as much as 250 mg of methanol daily or 32 times the EPA limit! Symptoms from methanol poisoning include headaches, ear buzzing, dizziness, nausea, gastrointestinal disturbances, weakness, vertigo, chills, memory lapses, numbness and shooting pains in the extremities, behavioral disturbances, and neuritis. The most well-known problems from methanol poisoning are vision problems including misty vision, progressive contraction of visual fields, blurring of vision, retinal damage, and blindness. Formaldehyde, on its own, is a known carcinogen, causes retinal damage, interferes with DNA replication and causes birth defects. As pointed out by Dr. Woodrow C. Monte, director of the food science and nutrition laboratory at Arizona State University, “There are no human or mammalian studies to evaluate the possible mutagenic, teratogenic or carcinogenic effects of chronic administration of methyl alcohol” (Rundquist, 2004). Dr. Monte was so concerned about the safety issues of methanol (and Aspartame®) that he filed suit with the F.D.A. requesting a hearing to address these issues. He asked the F.D.A. to … “Slow down on this soft drink issue long enough to answer some of the important questions. It’s not fair that you are leaving the full burden of proof on the few of us who are concerned and have such limited resources. You must remember that you are the American public’s last defense. Once you allow usage (of Aspartame®) there is literally nothing I or my colleagues can do to reverse the course. 
Aspartame will then join saccharin, the sulfiting agents, and God knows how many other questionable compounds enjoined to insult the human constitution with governmental approval.” Shortly thereafter, the Commissioner of the F.D.A., Arthur Hull Hayes, Jr., approved the use of Aspartame® in carbonated beverages; he then left for a position with G.D. Searle’s public relations firm. When Aspartame® was approved for use, Dr. H.J. Roberts, director of the Palm Beach Institute for Medical Research, had no reason to doubt the F.D.A.’s decision. “But my attitude changed,” he says, “after repeatedly encountering serious reactions in my patients that seemed justifiably linked to aspartame” (The Health Gazette, n.d.). Twenty years on, Roberts has coined the phrase “aspartame disease” to describe the wide range of adverse effects he has seen among aspartame-guzzling patients. He estimates that “[h]undreds of thousands of consumers, more likely millions, currently suffer major reactions to products containing aspartame. Today, every physician probably encounters aspartame disease in everyday practice, especially among patients with illnesses that are undiagnosed or difficult to treat.” In 2001, Roberts (2001) published a lengthy series of case studies, containing meticulous details of the treatment of 1,200 aspartame-sensitive individuals, or “reactors”, encountered in his

own practice. Following accepted medical procedure for detecting sensitivities to foods, Roberts had his patients remove Aspartame® from their diets. In nearly two-thirds of reactors, symptoms began to improve within days of removing Aspartame®, and improvements were maintained as long as Aspartame® was kept out of their diet. Roberts’ case studies parallel much of what was revealed in the F.D.A.’s report on adverse reactions to Aspartame® – that toxicity often reveals itself through central nervous system disorders and compromised immunity. His casework shows that Aspartame® toxicity can mimic the symptoms of, and/or worsen, several diseases that fall into these broad categories. Case studies, especially a large series like this, address some of the issues surrounding real-world use in a way that laboratory studies never can; and the conclusions that can be drawn from such observations aren’t just startling, they are also potentially highly significant. In fact, Roberts believes that one of the major problems with Aspartame® research has been the continued over-emphasis on laboratory studies. This has meant that the input of concerned independent physicians and other interested persons, especially consumers, is “reflexively discounted as ‘anecdotal’”. Many of the diseases listed by Roberts fall into the category of medicine’s “mystery diseases” – conditions with no clear etiology and few effective cures. And while no one is suggesting that Aspartame® is the single cause of such diseases, Roberts’ research suggests that some people diagnosed with, for example, multiple sclerosis, Parkinson’s or chronic fatigue syndrome may end up on a regimen of potentially harmful drugs that could have been avoided if they had simply stopped ingesting Aspartame®-laced products.

4.7.3.4 DKP

DKP is a breakdown product of phenylalanine that forms when aspartame-containing liquids are stored for prolonged periods.
In animal experiments it has produced brain tumours, uterine polyps and changes in blood cholesterol. Before the F.D.A. approved Aspartame®, the amount of DKP in our diets was essentially zero. No claim of DKP’s safety can be accepted as genuine until good-quality long-term studies have been performed, and no such studies have been done. The following diseases have been linked to Aspartame® toxicity:

Cancer
Studies have found a dangerous connection between Aspartame® consumption and the development of brain tumors. When Aspartame® breaks down it produces a substance called DKP. As your stomach digests DKP, it produces a chemical that induces the growth of brain tumors.

Diabetes
Aspartame® consumption is extremely harmful to people with diabetes. It makes it more difficult to control sugar levels and aggravates diabetes-related conditions such as retinopathy, cataracts, neuropathy and gastroparesis. The sweetener has also been known to cause convulsions that are often mistaken for insulin reactions.

Psychological Disorders
Emotional and mood disorders have been linked to Aspartame®. Studies suggest that people with certain emotional problems are more sensitive to Aspartame®. High levels of Aspartame® cause changes in serotonin levels, which can lead to behavioral problems, depression and other emotional disorders. In some cases, the side effects were so dangerous that doctors were forced to put an end to the studies.

Hinders Weight Loss
Aspartame® can be found in diet sodas and most other diet products. However, research indicates that the sweetener increases your hunger and can actually impede your weight loss. Phenylalanine and aspartic acid can cause spikes in insulin levels and force your body to remove the glucose from your blood stream and store it as fat. Aspartame® also inhibits the production of serotonin and prevents your brain from signaling to your body that you are full. This can lead to food cravings and make it more difficult for you to lose weight.

Birth Defects
Aspartame® is an excitotoxin, a substance that has the potential to damage or kill cells in the nervous system. The blood-brain barrier is a structure that stops harmful substances from penetrating the brain. The barrier does not completely form until a child is one year old; therefore, before a child is born, its nervous system is vulnerable to any dangerous excitotoxins that the mother may consume. Too much exposure to phenylalanine or aspartic acid may cause irreversible brain damage and other serious birth defects.

Vision Problems
Methanol, one of Aspartame®’s components, can damage the retinas and the optic nerves. Aspartame® consumption has been connected to eye pain, blurred vision and, in some cases, blindness.

Brain Damage and Seizures
Aspartame® can change the chemistry of the brain. Formaldehyde, a product of methanol, gathers in certain areas of the brain, causing degenerative diseases such as Parkinson’s, Alzheimer’s and ALS.
Aspartame® consumption can also trigger seizures, both in epileptics and in individuals without a history of epilepsy.
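The methanol dose comparison quoted earlier (an EPA recommended limit of 7.8 mg/day, about 56 mg per one-liter bottle of diet cola, and up to 250 mg/day for heavy users) reduces to simple arithmetic. The following is only a sketch that re-derives the figures cited in the text:

```python
# Methanol intake arithmetic, using only the figures quoted in the text.
EPA_LIMIT_MG_PER_DAY = 7.8      # EPA recommended consumption limit (mg/day)
MG_PER_LITER_DIET_SODA = 56.0   # approximate methanol in 1 L of diet cola
HEAVY_USER_MG_PER_DAY = 250.0   # heavy-user daily intake cited in the text

# How much diet soda already reaches the EPA limit?
liters_to_limit = EPA_LIMIT_MG_PER_DAY / MG_PER_LITER_DIET_SODA

# How many times over the EPA limit is the cited heavy user?
times_over_limit = HEAVY_USER_MG_PER_DAY / EPA_LIMIT_MG_PER_DAY

print(f"{liters_to_limit:.2f} L of diet soda reaches the EPA limit")
print(f"Heavy users consume about {times_over_limit:.0f}x the EPA limit")
```

Running this confirms the "32 times the EPA limit" figure in the text, and shows that well under a quarter of a liter already exhausts the recommended daily allowance.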

4.8 The Honey-Sugar-Saccharin-Aspartame Degradation in Everything

HSS®A® is the most notorious accomplishment of engineering a myriad of applications of the findings of so-called “New Science” — plastics to textiles to Botox — from which intangibles like characteristic time are eternally banished. The HSS®A® label for this pathway generalizes the seemingly insignificant example of the degradation of natural honey to carcinogenic “sweeteners” like Aspartame® because, as Albert Einstein most famously pointed out, the environmental challenges posed by conventional suppression or general disregard of essential phenomena in the natural environment, such as pollination, actually threaten the continued existence of human civilization in general. The HSS®A® pathway is a metaphor representing many other phenomena, and chains of phenomena, that originate from a natural form and are subsequently engineered through many intermediate stages into “new” products. In the following discussion, we lay out how this works. Once it is understood how disinformation works, one can figure out a way to reverse the process by avoiding aphenomenal schemes that lead to ignorance packaged with arrogance. Ever since the introduction of the culture of plastic over a century ago, the public has been indoctrinated into associating an increase in the quality, and/or qualities, of a final product with the insertion of additional intermediate stages of ‘refining’ the product. If honey – taken more or less directly from a natural source, without further processing – was fine, surely the sweetness that can be attained by refining sugar must be better. If the individual wants to reduce their risk of diabetes, then surely further refining of the chemistry of “sweetness” into such products as Saccharin® must be better still. And why not even more sophisticated chemical engineering to further convert the chemical essence of this refined sweetness into forms that are stable in the liquid phase, such as Aspartame®? In this sequence, each additional stage is defended and promoted as having overcome some limitation of the immediately previous stage. But at the end of this chain, what is left in, say, Aspartame® of the 200-plus beneficial qualities of honey? Looking from the end of this chain back to its start, how many laboratory rats ever contracted cancer from any amount of honey intake? How many nerves have been frozen by taking honey?
Honey is known to be the only food that possesses all the nutrients, including water, needed to sustain life. How many true nutrients does Aspartame® have? From the narrowest engineering standpoint, the kinds and number of qualities in the final product at the end of this Honey → Sugar → Saccharin® → Aspartame® chain have been transformed, but from the human consumer’s standpoint of the use-value of “sweet-tasting,” has there been a net qualitative gain going from honey all the way to Aspartame®? From a scientific standpoint, honey fulfils both conditions of phenomenality, namely: origin and process. That is, the source of honey (nectar) is real (even if it means flowers were grown with chemical fertilizers, pesticides, or even genetic alteration), and — even if the bees were subjected to air pollution or a sugary diet — the process is real (honeybees cannot make false intentions, therefore they are perfectly natural). The quality of honey can differ depending on other factors, e.g., chemical fertilizer, genetic alteration, etc., but honey remains real. As we “progress” from honey to sugar, the origin remains real (sugar cane or beet), but the process is tainted with artificial inputs, starting from electrical heating, chemical additives, bleaching, etc. Further “progress” to Saccharin® marks the use of another real origin, but this time the original source (crude oil) is a very old food source compared to the source of sugar. With steady-state analysis, they both appear to be of the same quality. As the chemical engineering continues, we resort to the final transition to Aspartame®. Indeed,

nothing is phenomenal about Aspartame®, as both the origin and the process are artificial. So, the overall transition from honey to Aspartame® has been from 100% phenomenal to 100% aphenomenal. Considering this, what economic calculations are needed to justify this replacement? It becomes clear that, without considering the phenomenality feature, any talk of economics would only mean the “economics” of aphenomenality. Yet, this remains the standard of neo-classical economics. Throughout the modern era, economics has remained the driver of the education system. A logic of economies of scale is developed and applied to determine how far this is taken in each case. For example, honey is perceptibly “sugar” to the taste. We want the sugar, but honey is also anti-bacterial and cannot rot. Therefore, the rate at which customers will have to return for the next supply is much lower and slower than the rate at which customers would have to return to resupply themselves with, say, refined sugar. Even worse: to extend the amount of honey available in the market (in many third-world countries, for example), sugar is added. The content of this “economic” logic then takes over and drives what happens to honey and sugar as commodities. There are natural limits to how far honey as a natural product can actually be commodified, whereas, for example, refined sugar is refined to become addictive so that the consumer becomes hooked and the producer’s profit is secured. The education system has been commodified in the modern age and remains the most vulnerable under the new world order that is taking shape at the dawn of the information age. The matter of intention is not considered in the economies of scale, leading to certain questions never being answered. No one asks whether any degree of external processing of what began as a natural sugar source can or will improve its quality as a sweetener. Exactly what that process, or those processes, would be is also unasked.
No sugar refiner is worried about how the marketing of his product in excess is contributing to a diabetes epidemic. The advertising that is crucial to marketing this product certainly will not raise this question. Guided by the “logic” of economies of scale, and the marketing effort that must accompany it, greater processing is assumed to be, and accepted as being, ipso facto good, or better. As a consequence of the selectivity inherent in such “logic,” any other possibility within the overall picture – such as the possibility that as we go from honey to sugar to saccharin to Aspartame®, we go from something entirely safe for human consumption to something cancerously toxic – does not even enter the frame. Such a consideration would prove very threatening to the health of big business in the short term. All this is especially and devastatingly clear when it comes to education and natural cognition. Over the last millennium, even after ‘original sin’ has been discredited as aphenomenal, it is widely and falsely believed that natural cognition is backward-looking and that humans are incapable of finding their own path to knowledge; they must be indoctrinated into being enlightened. Edible natural products in their natural state are already good enough for humans to consume at some safe level and to process further internally in ways useful to the organism. We are not likely to consume any unrefined natural food source in excess. However, the refining that accompanies the transformation of natural food sources into processed-food commodities also

introduces components that interfere with our normal ability to push a natural food source aside after some definite point. Additionally, with externally processed “refinements” of natural sources, the chances increase that the form in which the product is eventually consumed will include compounds that are not found anywhere in nature and that the human organism cannot usefully process without excessively stressing the digestive system. After a cancer epidemic, there is great scurrying to fix the problem. The cautionary tale within this tragedy is that, if the HSS®A® principle were considered before a new stage of external processing were added, much unnecessary tragedy could be avoided. There are two especially crucial premises of the economies of scale that lie hidden within the notion of “upgrading by refining”: (a) unit costs of production can be lowered (and unit profit therefore expanded) by increasing output Q per unit time t, i.e., by driving ∂Q/∂t (the temporal rate of change of Q) unconditionally in a positive direction; and (b) only the desired portion of the end-product Q is considered to have tangible economic and, therefore, also intangible social “value,” while any unwanted consequences – e.g., degradation of, or risks to, public health, damage(s) to the environment, etc. – are discounted and dismissed as false costs of production. Note that, if relatively free competition still prevailed, premise (a) would not arise even as a passing consideration. In an economy lacking monopolies, oligopolies, and/or cartels dictating effective demand by manipulating supply, unit costs of production remain mainly a function of some given level of technology.
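Premise (a) rests on ordinary cost arithmetic: a fixed cost spread over rising output drives the average unit cost down toward a floor set by the variable cost per unit. The following is a minimal sketch with purely hypothetical numbers (F and v are illustrative, not drawn from the text):

```python
# Unit cost = fixed cost spread over output + variable cost per unit.
# F and v are hypothetical numbers for illustration only.
F = 100_000.0   # fixed capital cost per period (equipment, ground-rent)
v = 2.0         # variable cost per unit produced

def unit_cost(q: float) -> float:
    """Average cost per unit at an output of q units per period."""
    return F / q + v

for q in (10_000, 50_000, 250_000, 1_000_000):
    print(f"Q = {q:>9,}: unit cost = {unit_cost(q):.2f}")

# As Q grows, the F/Q term vanishes: unit cost approaches the floor v = 2.0
# and cannot be driven arbitrarily below it -- the "downward inelasticity"
# discussed in the text.
```

The sketch also makes the floor visible: no increase in ∂Q/∂t pushes the unit cost below v, which is why, absent monopoly power, premise (a) offers only limited room to move.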
Once a certain proportion of investment in fixed capital (equipment and ground-rent for the production facility) becomes the norm among the various producers competing for customers in the same market, the unit costs of production cannot fall, or be driven arbitrarily, below a certain floor level without risking business loss. The unit cost thus becomes downwardly inelastic. The unit cost of production can become downwardly elastic, i.e., capable of falling readily below any asserted floor price, under two conditions:

a. during moments of technological transformation of the industry, in which producers who are first to lower their unit costs by using more advanced machinery will gain market share, temporarily, at the expense of competitors; or
b. in conditions where financially stronger producers absorb financially weakened competitors.

In neoclassical models, which all assume competitiveness in the economy, this second circumstance is associated with the temporary cyclical crisis. This is the crisis that breaks out from time to time in periods of extended oversupply or weakened demand. In reality, contrary to the assumptions of the neoclassical economic models, the impacts of monopolies, oligopolies, and cartels have entirely displaced those of free competition and have become the norm rather than the exception. Under such conditions, lowering unit costs of production (and thereby expanding unit profit) by increasing output Q per unit time t, i.e., by driving ∂Q/∂t unconditionally in a positive direction, is no longer an occasional and exceptional tactical opportunity. It is a permanent policy option: monopolies, oligopolies, and cartels manipulate

supply and demand because they can. Note that premise (b) points to how, where, and why consciousness of the unsustainability of the present order can emerge. Continuing indefinitely to refine nature out, by substituting ever more elaborate chemical “equivalents” hitherto unknown in the natural environment, has started to take its toll. The narrow concerns of the owners and managers of production are at odds with the needs of society. Although the fruits of production are appropriated privately, on the basis of concentrating so much power in so few hands, production itself has become far more social. The industrial-scale production of all goods and services as commodities has spread everywhere, from the metropolises of Europe and North America to the remotest Asian countryside, the deserts of Africa, and the jungle regions of South America. This economy is not only global in scope but also social in its essential character. Regardless of the readiness of the owners and managers to dismiss and abdicate responsibility for the environmental and human health costs of their unsustainable approach, these costs have become an increasingly urgent concern to societies in general. In this regard, the HSS®A® principle becomes a key and most useful guideline for sorting what is truly sustainable for the long term from what is undoubtedly unsustainable. The human being who is transformed into a mere consumer of products is a being who has become marginalized from most of the possibilities and potentialities of his/her existence. This marginalization is an important feature of the HSS®A® principle. There are numerous things that individuals can do to modulate, or otherwise affect, the intake of honey and its impacts. However, there is little – indeed, nothing – that one can do about Aspartame® except drink it.
With some minor modification, the HSS®A® principle helps illustrate how the marginalization of the individual’s participation is happening in other areas. What has been identified here as the HSS®A® principle, or syndrome, continues to work against both the increasing global striving toward true sustainability, on the one hand, and the humanization of the environment in all its aspects, societal and natural, on the other. Its silent partner is the aphenomenal model, which invents justifications for the unjustifiable and for “phenomena” that have been picked out of thin air. As with the aphenomenal model, repeated and continual detection and exposure of the operation of the HSS®A® principle is crucial for future progress in developing nature-science, the science of intangibles and true sustainability.

4.9 Assessing the Overall Performance of a Process

In order to break out of the conventional analysis introduced throughout the new science era, we will proceed to discuss some salient features of the time domain and present how the overall performance of a process can be assessed by using time as the fourth dimension. Time t here is not orthogonal to the three spatial dimensions. However, it is no less a dimension for not being mutually orthogonal. Socially available knowledge is not orthogonal either, with respect to time t or with respect to the three spatial dimensions. Hence, despite the training of engineers and scientists in higher mathematics that hints, suggests, or implies that

dimensionality must be tied up “somehow” with the presence of orthogonality, orthogonality is not in itself a relationship built into dimensionality. It applies only to the arrangements we have invented to render three spatial dimensions simultaneously visible, i.e., tangible. Between input and output, component phenomena can be treated as lumped parameters, just as, for example, in electric circuit theory, resistance/reactance is lumped in a single resistor, capacitance in a single capacitor, inductance in a single inductor, and the electromotive potential/force and current of the entire circuit are lumped at a power supply, or at special gated junction-points (such as between the base and emitter of a transistor). Similarly, in the economic theory of commodity transactions, relations of exchange in the market lump all “supply” with the seller and all “demand” with the buyer – even though in reality, as everyone knows, there is also a serious question of a “demand” (need for money) on the part of the seller and a certain “supply” (of cash) in the hands of the buyer. In Nature, or even within certain highly-engineered phenomena such as an electric circuit, in which human engineering has supplied all the ambient conditions (source of electrical energy, circuit transmission lines, etc.), even after assuming certain simplifying conditions like a near-zero frequency, virtually direct current flow, and very small potential differences, we still have no idea whether the current is continuous or how continuous, nor how stable or uniform the voltage difference is at any point in the circuit. The lumped-parameter approach enables us to characterize the overall result/difference/change at the output compared to the input without worrying about the details of what actually happened between the input and the output. Clearly, when natural processes are being considered, such an approach leaves a great deal unexplained and unaccounted for.
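The lumped-parameter point can be made concrete with a toy circuit calculation (all values hypothetical): two internally different resistor networks present exactly the same lumped resistance at their terminals, so input/output measurements alone cannot distinguish between them.

```python
# Two internally different networks with the same lumped (terminal)
# resistance. Values are illustrative only.

def series(*rs: float) -> float:
    """Equivalent resistance of resistors in series."""
    return sum(rs)

def parallel(*rs: float) -> float:
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in rs)

network_a = series(2.0, 3.0)       # 2 ohms + 3 ohms in series    -> 5 ohms
network_b = parallel(10.0, 10.0)   # 10 ohms parallel 10 ohms     -> 5 ohms

V = 10.0  # voltage applied at the terminals
for name, r in (("A (series)", network_a), ("B (parallel)", network_b)):
    print(f"Network {name}: R = {r:.1f} ohms, I = {V / r:.1f} A")

# Both networks draw the same current at the same voltage: the lumped
# description is identical, and everything "between input and output"
# is left unaccounted for.
```

This is exactly the convenience, and the blind spot, that the text describes: the computed terminal behavior matches observation while the interior remains opaque.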
So long as the computed result matches the difference measured between the input and the output, this approach opens the door to imposing any interpretation as a way to account for what happened. The problem with all such standards is that the question of their applicability for measuring something about the process-of-interest is never asked beforehand. Consider the known and very considerable physical difference between the way extremely high-frequency [tiny-wavelength] EM waves, on the one hand, and much lower-frequency [much greater wavelength] audible sound waves, on the other hand, each propagate. The meter may be quite reasonable for the latter case. Does it follow, however, that the nanometer – recall that it is based on subdividing the meter into one billion units – is equally reasonable for the former case? The physical reality is that the standard meter bar in Paris actually varies in length by a certain number of picometers or nanometers just within one Earth year. If the process-of-interest is EM radiation traversing light-years through space, however, variation of the standard meter by one nanometer, or even 1000 picometers, will make nonsense of whatever measure we assign to something happening in the physical universe at this scale. What the objectivity, externality and uniformity of standards enable is a comparison based on what the human observer can directly see, hear, smell, touch or taste – or, more indirectly, measure – according to standards that can be tangibly grasped within ordinary human understanding. However, is science reducible to that which may be tangibly grasped within ordinary human understanding? If science were so reducible, we could, and should, have spent

the last 350+ years since Galileo fine-tuning our measurements of the speed of bodies falling freely towards the Earth. As a result, this feature might then be catalogued for different classes of objects according to Aristotle’s principle – seemingly quite reasonable, perfectly tangible, yet utterly erroneous – that the speed with which objects fall freely towards the Earth is a function of their mass. This example hints at the solution to the conundrum. Once the principle of gravity as a force – something that cannot be directly seen, heard, smelt, touched or tasted – acting everywhere on the Earth was grasped, measuring and comparing the free fall of objects according to their mass had to be given up, because it was the attraction due to gravity that was the relevant common and decisive feature characteristic of all these freely-falling objects, not their individual masses. So, standards of measurement applied to phenomena and processes in Nature should cognize features that are characteristic of those phenomena and processes, not be externally applied regardless of their appropriateness or inappropriateness. Instead of measuring the overall performance of a process or phenomenon under study or development according to criteria that are characteristic, however, statistical norms are frequently applied. These compare and benchmark performance relative to some standard that is held to be both absolute and external. Public concern about such standards – such as what constitutes a “safe level of background radiation” – has grown in recent years to the point where the very basis of what constitutes a standard has come into question. Our research group advanced the counter-notion of using units or standards that are “phenomenal” (as opposed to aphenomenal).
For those who want a science of nature that can account for phenomena as they actually occur or appear in nature, standards whose constancy can only be assured outside the natural environment — under highly controlled laboratory conditions, for example, or “in a vacuum” — are in fact entirely arbitrary. Phenomenally-based standards, on the other hand, are natural in yet a deeper sense; they include the notion of a characteristic feature that may be cognized by the human observer. These are standards whose objectivity derives from the degree to which they are in conformity with nature. The objectivity of a natural standard cannot and must not be confounded with the vaunted neutrality of position of some external arbiter.

4.9.1 The Process of Standardization

The most serious acid test of a proposed scientific characterization or analysis of any phenomenon is that it accounts for everything necessary and sufficient to explain the phenomenon – its origin, its path and its end-point – thereby rendering it positively useful to human society. The same criterion was used in previous civilizations to distinguish between the real and the artificial. Khan and Islam (2007) introduced a criterion that identifies the end-point by extending time to infinity. This criterion avoids scrutiny of the intangible source of individual action (namely, intention). However, Zatzman and Islam (2007a) pointed out that while the end-point at time t = ∞ can serve as a criterion, it will not disclose the pathway unless a continuous time function is introduced. Islam et al. (2010, 2016) used this concept and introduced the notion of the knowledge dimension – a dimension that arises from introducing time as a continuous function.

Any number of examples could be cited from the commercial world of product advertising to further illustrate the nub of the problem; this chapter will introduce some of the more egregious cases to illustrate the trends being noted here. Which discipline(s) from the science of tangibles, for example, could model the following? “In every sense, a Whitestone Cheese is the embodiment of its environment. Pressed by hand, bathed by hand, turned by hand and packed by hand, it is a product of skill and mystery. Like original works of art, no two are alike. While their styles embrace a faint echo of Europe, Whitestone’s cheeses are unto themselves as unique as the land that created them” (Delicious Organics, 2007). We all know handmade cheese is better-tasting, and that mother’s milk is the best. But do we have a criterion that should lead us to expect these assumptions to be true or best? How about hand-drawn milk as compared to machine-drawn? How about un-Pasteurized® milk as compared to Pasteurized®? Do we even have a choice? We truly do not, since commercialization takes place after engineering calculations are made from the science of tangibles. Then, the economics of tangibles is applied to provide the justification with a guarantee. The most important aspect of standardization is that the deduction process must be phenomenal before we begin the process of creating a standard. For a deduction process to be phenomenal, both the major and minor premises must be true. If either of these premises is not true, the process becomes aphenomenal, having no meaning whatsoever.
Consider the following syllogism (from Zatzman and Islam, 2007):

All Americans speak French [major premise]
Jacques Chirac is an American [minor premise]
Therefore, Jacques Chirac speaks French [conclusion-deduction]

If, in either the major or minor premise, the information relayed above is derived from a scenario of what is merely probable (as distinct from what is actually known), the conclusion, which happens to be correct in this particular case, would be not only acceptable as something independently knowable, but reinforced as something also statistically likely. This then finesses determining the truth or falsehood of any of the premises, and, eventually, someone is bound to “reason backwards” to deduce (in fact, this is called ‘induction’) the statistical likelihood of the premises from the conclusion! Indeed, this latter version, in which eventually all the premises are falsified as a result of starting out with a false assumption asserted as a conclusion, is exactly what has been identified and labeled elsewhere as the aphenomenal model (Khan and Islam, 2012). They used the term “aphenomenality” (in contrast to truth or falsehood) to describe in general the non-existence of any purported phenomenon, or of any collection of properties, characteristics or features ascribed to such a purported but otherwise unverified or unverifiable phenomenon. If the first premise contradicts what is true in nature, the entire scientific investigation will be false. Such an investigation cannot possibly lead to a meaningful conclusion. The following syllogism shows the strength of deductive logic (the concept of “virtue” intended here is “that which holds positive value for an entire collectivity

of people”; not just for some individual or arbitrary subset of individual members of humanity):

All virtues are desirable.
Speaking the truth is a virtue.
Therefore, speaking the truth is desirable.

Even before it is uttered, a number of difficulties have already been built into this apparently non-controversial syllogism. When it is said that “all virtues are desirable”, there is no mention of a time factor (pathway) or intention (source of a virtue). For instance, speaking out against an act of aggression is a virtue, but is it desirable? A simple analysis indicates that unless the time horizon is extended to infinity (meaning something that is desirable in the long run), practically all virtues are undesirable: even giving to charity requires austerity in the short term, and defending a nation requires self-sacrifice – an extremely undesirable phenomenon in the short term. In the same way, if giving charity is a virtue, would that make giving away stolen goods a charity? Robin Hood may be an acceptable hero in post-Renaissance culture, but is such a categorization scientifically grounded? Giving away stolen goods can be a virtue only if the history (time function) is obliterated. The third component is the source of an act. For instance, is giving away with the intention of recovering something in the future a virtue? Is helping an oppressor a virtue? This logic shows the need for highlighting both the source (intention) and the pathway (time function going back to the origin) of an action in order to qualify it as a virtue. The scientifically correct reworking of this syllogism should be:

All virtues (both intention and pathway being real) are desirable for time t approaching ∞.
Speaking the truth is a virtue at all times.
Therefore, speaking the truth is desirable at all times.

This brings us to the question: what can prompt one to consider the long term, as long as infinity in time?
It is indeed the intention to conform to the universal order, which is embedded with the beginning and end of time, that can prompt one to take such a long-term approach. This is the essence of the approach of obliquity that can ensure inherent sustainability, or true phenomena (Khan and Islam, 2012). This introduction of intangibles, such as intention (as source) and time (which governs the chronology of events), is relatively new (Islam et al., 2015).

4.9.2 Natural Ranking Process

A natural ranking process starts with a proper definition of what is natural. In Chapter 2, we have seen what constitutes the truth. For all work on the paradigm of the truth (the mathematics of it, the science of it, etc.), one must establish:

an actual, true source
an actual, true science (pathway)
an actual, true end-point, or completion

Knowledge can be advanced even if the "true object" is not the entire truth. In fact, it is important to recognize that the whole truth cannot be achieved. However, this should not be used as an excuse to eliminate any variable that might have some role but whose immediate impact is not "measurable". All of the potential variables that might have a role should be listed right at the beginning of the scientific investigation. During the solution phase, this list should be revisited in order to make room for the possibility that, at some point, one of the variables will play a greater role. This process is equivalent to developing a model that has no aphenomenal assumption attached to it. There is a significant difference between that which tangibly exists for the five senses in some finite portion of time and space, and that which exists in Nature independently of our perceptual functioning in some finite portion of time and space. Our limitation is that we are not able to observe or measure beyond what is tangible. However, the model with which we are comparing should not suffer from these shortcomings. If we grasp the latter first, then the former can be located as a subset. However, errors will occur if we proceed from the opposite direction, according to the assumption that what is perceivable about a process or phenomenon in a given finite portion of time and space contains everything typical and/or characteristic of the natural environment surrounding and sustaining the process or phenomenon as observed. An example of proceeding according to this latter pattern is the way mediaeval medical texts portrayed the human fetus as a "homunculus", a miniaturized version of the adult person. 
Proceeding according to the former pattern, on the other hand, if we take the phase (or "angle") x as a complex variable, de Moivre's theorem can be used to readily generate expressions for cos nx and sin nx, whereas (by comparison) if we struggle with constructions of right triangles in the two-dimensional plane, it is a computationally intensive task just to derive cos 2x and sin 2x, and orders of magnitude more difficult to extend the procedure to cos nx and sin nx. In technology development, it is important to take a holistic approach. The only criterion that one can use is the reality criterion: a reality is something that does not change as time goes to infinity. This is the criterion that Khan and Islam (2007) employed to define sustainability. If the ranking of a number of options is performed based on this criterion, it is equivalent to the real (phenomenal) ranking. This ranking is absolute and must be the basis for the comparison of various options. Table 4.11 shows the ranking of honey, sugar, saccharine, and Aspartame®. With the reality-index ranking system, the left column of Table 4.11 gives the true ranking of these four products. In technology development, this natural (real) ranking is practically never used. Under most other ranking criteria, the rankings are reversed, meaning that the natural order is turned upside down. There are, however, some criteria that would give the same ranking as the natural one, but that does not mean the criterion is legitimate. For instance, the heating value of honey is the highest, but this does not make the heating-value criterion correct; putting it in terms of the syllogism presented in the previous section, it merely reaffirms something we already knew all along. This table is discussed in Section 8 infra as a starting-point for establishing a "reality index"

that would allow a ranking according to how close the product is to being natural.

Table 4.11 Synthesized and natural pathways of organic compounds as energy sources, ranked and compared according to selected criteria. The natural (real) ranking ("top" rank means most acceptable) is set against aphenomenal rankings by bio-degradability, efficiency1 (e.g., η = "sweetness/g"), profit margin, and heating value* (cal/g).

Natural (real) ranking of the four groups considered:

Sweeteners: 1. Honey; 2. Sugar; 3. Saccharine; 4. Aspartame
Wood: 1. Organic wood; 2. Chemically-treated wood; 3. Chemically grown, chemically treated wood; 4. Genetically-altered wood
Energy sources: 1. Solar; 2. Gas; 3. Electrical; 4. Electromagnetic; 5. Nuclear
Cleaning agents: 1. Clay or wood ash; 2. Olive oil + wood ash; 3. Vegetable oil + NaOH

Under the aphenomenal criteria these orders are largely inverted. For the sweeteners, efficiency expressed as "sweetness per gram" and the profit margin rank the products 4-3-2-1 (the efficiency ranking reverses if toxicity is considered), while the heating value (1-2-3-4) happens to preserve the natural order. The wood rankings reverse depending on the application (e.g., if durability is the criterion, or if organic wood is treated with organic chemicals). For the energy sources, neither efficiency nor heating value can be calculated for direct solar, which therefore cannot be ranked under those criteria, while the profit margin places nuclear at the top. For the cleaning agents, the rankings reverse if the volume needed for cleaning a unit area is considered (anti-bacterial soap will not use olive oil), or if the global picture is considered.

1 This efficiency is a local efficiency that deals with an arbitrarily set size of sample.
* Calorie/gram is a negative indicator for "weight watchers" (who are interested in minimizing calories) and a positive indicator for energy-drink makers (who are interested in maximizing calories).
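The computational advantage claimed earlier for de Moivre's theorem is easy to verify numerically. The sketch below (Python is used purely for illustration; the function name is ours) obtains cos nx and sin nx for any n from a single complex power, instead of n-fold triangle constructions:

```python
import math

def cos_sin_multiple(n, x):
    """Use de Moivre's theorem, (cos x + i sin x)**n = cos(nx) + i sin(nx),
    to obtain cos(n*x) and sin(n*x) from one complex exponentiation."""
    z = complex(math.cos(x), math.sin(x)) ** n
    return z.real, z.imag

# The identity holds for any n, with no extra derivation effort as n grows.
x = 0.7
for n in (2, 5, 11):
    c, s = cos_sin_multiple(n, x)
    assert math.isclose(c, math.cos(n * x), abs_tol=1e-12)
    assert math.isclose(s, math.sin(n * x), abs_tol=1e-12)
```

The point is not the arithmetic itself but the pattern: grasping the more general (complex) picture first makes every special case fall out as a subset.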

In engineering calculations, the most commonly used criterion is efficiency, which relates output to input. Ironically, an infinite efficiency would mean that someone has produced something out of nothing – an absurd concept for an engineered creation. However, when nature does that, it operates at 100% efficiency. For instance, every photon coming out of the sun gets used. So, for a plant the efficiency is limited (less than 100%) because it is incapable of absorbing every photon it comes into contact with, but the efficiency would become 100% if every photon were accounted for. This is why maximizing efficiency as a man-made engineering practice is not a legitimate objective. If the concept of efficiency is to be used in terms of overall performance, the definition of efficiency has to be changed. With this new definition (called "global efficiency" by Khan and Islam, 2012 and Chhetri and Islam, 2008), the efficiency calculations will differ significantly from conventional efficiency, which considers only small objects of practical interest. As an example, consider an air conditioner running outdoors. The air in front of the air conditioner is indeed chilled, while the air behind the device is heated. If cooling-efficiency calculations are performed on this air conditioner, the conventional calculation, determined by measuring temperatures in front of the unit and dividing the cooling delivered by the work done to operate it, would show a finite efficiency, albeit not 100%. Contrast this with the same calculation if temperatures all around the device are considered. The process will be proven utterly inefficient, and it will become obvious that the operation is not a cooling process at all; assigning a cooling efficiency to a process that is actually a net producer of heat is absurd. Consider now an air conditioner running on direct solar heat. 
An absorption cooling system has no moving parts; the solar heat is converted directly into cool air. The solar heat is not the result of an engineered process. What, then, would be the efficiency of this system, and how would its cooling efficiency compare with the previous one? Three aspects emerge from this discussion. First, global efficiency is the only measure of the true merit of a process. Second, global efficiency is the only efficiency that can be used to compare various technological options. Third, a process that involves natural options cannot be compared with a process that is totally "engineered". For instance, the efficiency in the latter example (as output/input) is infinite, considering that no engineered energy has been imparted to the air conditioner.
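The difference between the two bookkeeping choices can be sketched with hypothetical numbers; the 3:1 cooling-to-work ratio below is an assumed figure for illustration, not a measured one:

```python
# Hypothetical air conditioner running outdoors:
# per 1 kJ of electrical (engineered) input, it removes 3 kJ of heat
# from the air in front of it.
work_in = 1.0          # kJ of engineered input
cooling_out = 3.0      # kJ removed from the air in front

# By energy balance, everything removed plus the work itself is
# rejected as heat behind the unit.
heat_rejected = cooling_out + work_in   # 4.0 kJ

# Conventional ("local") efficiency draws the boundary around the
# cooled side only, and the device looks like a fine cooler:
local_efficiency = cooling_out / work_in        # 3.0

# Global efficiency draws the boundary around all the surroundings:
net_cooling = cooling_out - heat_rejected       # -1.0 kJ
global_efficiency = net_cooling / work_in       # -1.0: net heating
```

The local number flatters the device; the global number reveals that the "cooling" operation is, overall, a heating operation.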

No engineering design is complete until economic calculations are performed. Therein lies the drive to maximize profit margins. Indeed, the profit margin has been the single most important criterion for developing a technology ever since the Renaissance, which saw the short-term approach advance at an unparalleled pace. As Table 4.11 indicates, natural rankings are generally reversed if the criterion of profit maximization is used. This affirms, once again, how modern economics has turned pro-nature techniques upside down.
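The reversal described above can be illustrated with placeholder numbers; the profit margins below are hypothetical values chosen only to mimic the pattern of Table 4.11, not measured data:

```python
# Natural rank follows the real (phenomenal) ranking of Table 4.11;
# profit margins are illustrative placeholders.
products = {
    "honey":     {"natural_rank": 1, "profit_margin": 0.05},
    "sugar":     {"natural_rank": 2, "profit_margin": 0.20},
    "saccharin": {"natural_rank": 3, "profit_margin": 0.45},
    "aspartame": {"natural_rank": 4, "profit_margin": 0.70},
}

by_nature = sorted(products, key=lambda p: products[p]["natural_rank"])
by_profit = sorted(products, key=lambda p: -products[p]["profit_margin"])

# Profit maximization exactly reverses the natural order.
assert by_profit == list(reversed(by_nature))
```

The more denatured the product, the higher the margin; ranking by profit therefore turns the natural order upside down.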

4.9.3 A New Approach to Economic Analysis

In this section, we sum up the whole chapter and discuss the potential practical applications of this new mindset in economics, whether in data collection and analysis or in policy development. For economists, this section amounts to transforming intangible costs/benefits into tangible ones. The above approaches to weeding out aphenomenal features and red herrings inherent in how research questions are posed and answered have a number of rich applications. One is in the area of agricultural product characterization. As a first stab in the new direction that this approach opens up, consider the following cases, involving the cultivation of a crop:

Case A) Crop grown without any added fertilizer; the land is cultivated with naturally available organic fertilizers (from flooding, etc.). Consider the crop yield to be Y0. This is the baseline crop yield.
Case B) Crop grown with organic fertilizer (e.g., cow dung in Asia, guano in the Americas). Consider the crop yield to be Y1.
Case C) Crop grown with organic fertilizer and natural pesticide (e.g., plant extract, limestone powder, certain soils). Consider the crop yield to be Y2.
Case D) Crop grown with chemical fertilizer (the ones introduced during the "green revolution"). Consider the crop yield to be Y3.
Case E) Crop grown with chemical fertilizer and chemical pesticide. Consider the crop yield to be Y4.
Case F) Crop grown with chemical fertilizer, chemical pesticide, and genetically modified seeds. Consider the crop yield to be Y5.
Case G) Crop grown with chemical fertilizer, genetically modified seeds, and genetically modified pesticide. Consider the crop yield to be Y6.

It is well known that, for a given time, Y6 > Y5 > Y4 > Y3 > Y2 > Y1 > Y0. If the profit margin is used as a criterion, practices that give the highest crop yield would be preferred. 
Of course, at time t = "right now", this is equivalent to promoting the notion that "crops are crops". Aside from any considerations of product quality, which might suffer a great setback at any time other than t = "right now", the higher yield directly relates to higher profit. Historically, a portion of the marketing budget is allocated to obscuring the real quality of a product in order to linearize the relationship between yield and profit margin. The role of advertisement in this is to alter

people's perception, which is really a euphemism for forcing people to focus exclusively on the short term. If natural rankings are used in this technology development, Cases D through G would be considered progressively worse in terms of sustainability. If this is the ranking, how then can one proceed with a characterization of a crop that must have some sort of quantification attached to it? For this, a sustainability index is introduced in the form of a Dirac-type function, δ(s), such that: δ(s) = 1, if the technology is sustainable; and δ(s) = –1, if the technology is not sustainable. Here, the sustainability criterion of Khan (2007) is used. A process is aphenomenal if it does not meet the sustainability criterion, and it then assumes a δ value of –1. Therefore, the adjustment we propose in revising the crop yield is as follows:

Y_real = δ(s) · Y   (4.1)

Here Y stands for the actual crop yield, something recorded at present time. Note that Y_real has a meaning only if future considerations are made. This inclusion of the reality index forces decision makers to include long-term considerations. The contribution of a new technique is evaluated through the parameter Q_real (which stands for real quality), given as:

Q_real = δ(s) · (Y – Y0)   (4.2)

For unsustainable techniques, the real yield, Y_real, will always be smaller than Y0. The higher the apparent crop yield in this case, the more diminished the actual quality. In addition, there might be added quality degradation that is a function of time. Because an unsustainable technology continues to play havoc on nature for many years to come, it is reasonable to levy this cost when calculations are made. This is done through a function, L(t). If the technique is not sustainable, the quality of the product will continue to decline as a function of time. Because quality should be reflected in pricing, this technique provides a basis for a positive correlation between price and quality. 
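As a sketch, assume the simple adjustment Y_real = δ(s)·Y for the crop cases above, with δ(s) = +1 for sustainable techniques and –1 for unsustainable ones; the yield figures below are hypothetical placeholders:

```python
# case: (apparent yield, sustainable?). Cases A-C use natural inputs;
# Cases D-G rely on chemical fertilizer, pesticide, and/or GM seeds.
cases = {
    "A": (1.0, True),
    "B": (1.2, True),
    "C": (1.3, True),
    "D": (1.5, False),
    "E": (1.7, False),
    "F": (2.0, False),
    "G": (2.2, False),
}

def real_yield(y, sustainable):
    """Apply the sustainability index delta(s) = +/-1 to the raw yield."""
    delta_s = 1 if sustainable else -1
    return delta_s * y

ranked = sorted(cases, key=lambda c: real_yield(*cases[c]), reverse=True)
# Apparent yield would favor G; the adjusted ranking favors C and pushes
# the highest-yielding unsustainable case (G) to the bottom.
assert ranked[0] == "C" and ranked[-1] == "G"
```

Under this adjustment, the harder an unsustainable technique pushes its apparent yield, the worse its real standing becomes, which is precisely the decision-making incentive the reality index is meant to create.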
This is a sought-after goal that has not yet been realized in the post-industrial-revolution era (Zatzman and Islam, 2007b). At present, price vs. quality has a negative slope, at least during the early phase of a new technology, and the profit margin is always inversely proportional to the product quality. Nuclear energy may be the cheapest, but its profit margin is the highest. Herbal medicines might be the only ones that truly emulate nature, which has all the solutions, yet their profit margins are the lowest. Today, organic honey (say, from a natural forest) is over 10 times more expensive than farm honey when it is sold in stores. People living close to natural habitats do have access to natural honey free of cost, yet the profit margin in farm honey is still the highest. In fact, pasteurized honey from Australia is still one of the most expensive locally available unadulterated honeys (from a local source, but not fully organic) in the Middle East. The aim of this approach is to establish, in a stepwise manner, a new criterion that can be used to rank product quality, depending on how real (natural) the source and the pathways are. This

will distinguish between organic flower honey and chemical flower honey, between the use and non-use of antibiotics on bees, electromagnetic zones, farming practices, and sugar feeding for bees, as well as numerous intangibles. This model can be used to characterize any food product in a way that makes its value real. In this context, the notion of mass balance needs to be rethought, so that infinite dimensions (using t as a continuous function) can be handled. What we have to establish is the dynamism of the mass-energy-momentum balance at all scales, and the necessity of non-linear methods for computing just where the balance is headed at any arbitrarily chosen point. Non-linear needs to be understood to mean that there is no absolute boundary; there is only the relative limit between one state of computation and other computational states. Differences between states of computation are not necessarily isomorphic (in 1:1 correspondence) with actual differences between states of nature. Knowledge gathered about the former is valuable only as one of a number of tools for determining more comprehensively what is actually going on with the latter.

1 A solution of local anesthetic (freezing), and often a narcotic as well, is then given through this catheter inserted near the spinal cord. An infusion of oxytocin or another synthetic hormone may also be added to induce early delivery.
2 We will see in subsequent chapters that, in scientifically correct pricing, there should be a penalty for turning the natural (real) into the artificial. With the current technology development mode, turning from natural to artificial actually increases the profit margin – so much so that scientific pricing would create a paradigm shift in economic development.
3 Botulism is a rare paralytic illness caused by a toxin which is very poisonous to humans. As late as August 3, 2013, a headline read: "New Zealand recalls dairy products over botulism fears". 
4 At this time no university in the United States, and indeed very few outside Germany, supported advanced studies in organic chemistry. The field itself began with the accidental discovery in a German textile company’s laboratory in the late 1850s of how to synthesize aniline blue.

Chapter 5
Comprehensive Analysis of Energy Sustainability

5.1 Introduction

We have seen that economics is true when it involves the real value of an asset and when the movement of money follows a natural process. We have also seen the deconstruction of existing theories and rules that renders them irrelevant to real economics. In Chapter 4, we demonstrated that the prevailing mindset of technology development is not amenable to sustainability analysis. In this chapter, we present what needs to be done in order to analyze an energy development project. The term 'sustainability' is one of the most abused terms in the field of energy management. There are numerous definitions of the word, yet few have any scientific significance (Khan and Islam, 2007; Islam et al., 2012). Numerous vague phrases, such as "inherent characteristic of healthy social and environmental systems" or "maintaining or enhancing various system capacities … so that the system can withstand external shocks and return to normal functioning", have surfaced all over the place without qualification of terms such as 'healthy', 'capacity', or 'normal'. It is also stated that sustainable development involves management of resources commensurate with long-term wealth creation and the maintenance of capital. The monetization of sustainability is not subtle. This notion has fuelled the misconception that resource management must have financial gain as its primary motivation. From that point on, a myopic approach is taken without exception. With that starting point, terms such as 'dynamic' and 'integrated' are introduced without regard to the false start embedded in the original definition. This definition of sustainability also suffers from a prejudice against petroleum resource development. In general, there has been a perception that solar, wind, and other forms of 'renewable' energy are more sustainable, or less harmful to the environment, than their petroleum counterparts. 
It is stated that renewable energy is energy collected from renewable resources, which are naturally replenished on a human timescale, such as sunlight, wind, rain, tides, waves, and geothermal heat. Chhetri and Islam (2008) have demonstrated that the claim of harmlessness and absolute sustainability is not only exaggerated, it is not supported by science. Irrespective of scientific research, however, this positive perception has translated into global public support. One such survey, performed by Ipsos Global in 2011, found very favourable ratings for non-fossil-fuel energy sources (Figure 5.1). Perception has economic implications attached to it. The Ipsos study found 75% agreeing with the statement "scientific research makes a direct contribution to economic growth in the UK". However, in the workshops, although participants agreed with this, they did not always understand the mechanisms through which science affects economic growth. There is strong support for the public funding of scientific research, with three-quarters (76%) agreeing that "even if it brings no immediate benefits, research which advances knowledge should be funded by the Government". Very few (15%) think that "Government funding for science should be cut because the money can be better spent elsewhere". This is in spite of public support for cutting

Government spending overall. It is no different in the USA, where perception translates directly into pressure on the legislative body, resulting in improved subsidies for certain activities.

Figure 5.1 Public perception toward energy sources (Ipsos, 2011).

The Energy Outlook considers a range of alternative scenarios to explore different aspects of the energy transition (Figure 5.2). The scenarios have some common features, such as a significant increase in energy demand and a shift towards a lower-carbon fuel mix, but differ in terms of particular policy or technology assumptions. In Figure 5.2, the Evolving Transition (ET) scenario is a direct function of public perception, which dictates government policies, technology, and social preferences. Some scenarios focus on particular policies that affect specific fuels or technologies, e.g., a ban on sales of internal combustion engine (ICE) cars, a greater policy push towards renewable energy, or weaker policy support for a switch from coal to gas; others consider the overall pace of change, e.g., faster and even-faster transitions.

Figure 5.2 Energy outlook for 2040 as compared to 2016 under various scenarios (*Renewables includes wind, solar, geothermal, biomass, and biofuels, from BP Report, 2018).

In this chapter, the concept of true energy sustainability is presented and analyzed. This is done by discussing the salient features of modern technologies and economic models. At the end of the chapter, a comprehensive picture of energy sustainability – from both an environmental and an economic (i.e., short-term/long-term wealth) perspective – emerges.

5.2 Sustainability in the Information Age and Environmental Insult

In this era of globalization, technology is changing every day. Due to continuous change and competition between organizations, it is becoming increasingly difficult to sort out true information from disinformation. In the field of management, a 'sustainable organization' can be defined as an organization in which the following features are present: i) political and security drivers and constraints, ii) social, cultural and stakeholder drivers and constraints, iii) economic and financial drivers and constraints, and iv) ecological drivers and constraints. Thus, the concept of sustainability is the vehicle for near-future Research & Development (R&D) in technology. In today's society, there is no model for sustainability, as none of the current technology developments is fully sustainable. In fact, Khan and Islam (2016) argued that the sustainability scenario is worsening and that, notwithstanding the propaganda, every new solution has made the environment more vulnerable to insult. In this regard, a paradigm shift occurs if nature is taken as a model. After all, nature is 100% zero-waste (Khan and Islam, 2016); that is, in nature, all functions or techniques are inherently sustainable, efficient and functional for an unlimited time period (this can be expressed as: ∆t→∞). By following the same path as the functions inherent in nature, our research shows how to develop truly sustainable technology (Islam et al., 2012; Chhetri and Islam, 2008; Khan and Islam, 2016). Khan and Islam (2007) introduced a new approach to technology evaluation based on a novel sustainability criterion. In their study, they considered not only environmental, economic and regulatory criteria but also investigated the sustainability of the technologies themselves. 
"Sustainability" and "sustainable technology" have been used in many publications, company brochures, research reports and government documents that do not give a clear direction (Appleton, 2006; Khan and Islam, 2007; Hossain and Al-Majed, 2015). Contrary to the true sustainability model of technological development, 'corporatization' – which is discussed in Chapter 3 – stands as its foremost enemy in the current world. The first target of corporatization has been humans: human thought material (HTM) has been disconnected from conscience and the conscientious thought process. After marginalizing humans, the next target is water – the most ubiquitous matter, which is also the essence of life and vitality. This water is turned, metaphorically, into Coke – an agent that reverses the vitality of water into morbidity. A similar scheme turns air, the most ubiquitous gas, into cigarette smoke and exhaust, and dirt, the most ubiquitous solid, into nanomaterial, all to the benefit of corporate purses and to the detriment of the environment and society at large. Likewise, fossil fuel, which produces CO2 – the essence of greenery – is replaced with 'solar electricity' that consumes processed silicon (SiO2) and its associated toxins in exchange for that CO2. The overall impact of this mode of economic extremism is felt by the

environment. Following are the various sectors affected.

5.2.1 Agriculture and Development

The aspects most affected by the corporatization of technological development are the following: biodiversity loss, global warming, and water availability. The causes of these disastrous results are often overlooked. For instance, the role of chemical fertilizers and pesticides is rarely linked with the loss of biodiversity; instead, the corporatization camp resorts to blaming over-cultivation and deforestation (Chhetri and Islam, 2008). Similarly, global warming is not deemed connected to refining and other processes that add numerous artificial chemicals to an otherwise sustainable product (Islam et al., 2012). Water availability is rarely linked to chlorination, pollution from factory farms and industrial plants, and activities such as fracking that use artificial chemicals (Islam, 2014).

5.2.2 Desertification

Desertification is a huge problem. Drought is considered to be one of the largest causes of famine and starvation all over Africa and South America. Because of changes in climate, urbanization, deforestation and pollution, thousands of acres of arable land are disappearing every day. In the meantime, less than 20% of many countries' "arable" land is being used. While this topic is much talked about, most evade any tangible solution to the problem (Islam, 2015).

5.2.3 Ecosystem Change

Any interference of unsustainable technology with the ecosystem will render irreversible changes in the ecosystem (Khan and Islam, 2016). The only true model of sustainability is to be found in Nature. A truly sustainable process conforms to connected and/or underlying natural phenomena, in both source and pathway. Scientifically, this means that true long-term considerations of humans should include the entire ecosystem.1 Some have called this inclusion 'the humanization of the environment' and posited this phenomenon as a precondition of true sustainability (Zatzman and Islam, 2007). The inclusion of the entire ecosystem is meaningful when the natural pathway of every component of the technology is followed. Only such a design can assure both short-term (tangible) and long-term (intangible) benefits.

5.2.4 Fisheries

Environmental degradation due to unsustainable practices in fisheries serves as a great reminder of how a seemingly infinite resource can be turned into a calamity. It is no understatement to say that, for the first 470 years, the harvesting of these resources posed little or no threat either to the marine environment or to the present and future prospects of the coastal communities most involved in this activity. In the last 30 years of that half-millennium, however, what remained was literally raped from stem to stern at unprecedented speed (Zatzman, 2013). The historical exegesis brings out in striking manner how far out of touch both the promoters of this fishery and its critics actually were with regard to the conduct of this

fishery under modern economic conditions of vertically integrated resource extraction. None of them manifested the slightest awareness of how this fishery could have averted the dramatic collapse that eventually destroyed the livelihoods of the families of more than 40,000 commercial fishermen from the Canadian provinces of Newfoundland and Labrador, Quebec and Nova Scotia after 1992. This dialogue of the deaf was manifest not only in the late 1970s, as the struggle over the northwest Atlantic fisheries' future heated up to become one of the sideshows of the global confrontation between the U.S. and Soviet superpowers over control of the world's oceanic spaces; the same thinking that failed to address the problems of that time was being repeated 30 years later by some of the most vociferous critics of the antics of the trawling fleets, Canadian and foreign, back in the 1970s. In 1992, the Canadian government declared a moratorium on the fishing of a variety of species on the east coast.

5.2.5 Deforestation

The world's forests play a crucial role in maintaining environmental balance. They provide renewable and sustainable raw materials (including medicine) and energy, maintain biological diversity, mitigate climate change, protect land and water resources, provide recreation facilities, improve air quality, and help alleviate poverty. Man-made activities of the post-Renaissance period affect the forest in an irreversible manner. In view of competing interests in the benefits of forest resources and forest land, the Food and Agriculture Organization of the United Nations has carried out global forest resources assessments at five- to ten-year intervals since 1946. The most recent and most extensive assessment was completed in 2005 and aimed at measuring progress towards sustainable forest management. The assessment focused on six themes representing important elements of forest management:

Extent of forest resources
Biological diversity
Forest health and vitality
Productive functions of forest resources
Protective functions of forest resources
Socio-economic functions

Information was collected from 229 countries and territories for three points in time: 1990, 2000, and 2005. None of these reports yielded any scientifically sound solution to the problem of deforestation (Khan and Islam, 2007). Claims have been made that, by adopting the concept of sustainable forest management as a reporting framework, it is possible to provide a holistic perspective on global forest resources, their management, and their uses. However, in the absence of a truly sustainable technology development scheme, these are but hollow claims. The current scenario is dismal. Despite their immense value, nearly half of the world's forests have been lost. What is worse, we are cutting them down at greater rates each year to plant crops, graze cattle and generate income from timber and other forest products. 
This deforestation immediately affects the climate, with climate-change scenarios showing that 11% of unnatural climate change is evoked by deforestation. This amount is similar to the CO2 emissions from all cars and trucks on Earth combined. In addition, an estimated 50% of tropical rainforest protected areas are 'empty'. These "empty forests" contain trees but few animals, as a result of overexploitation and uncontrolled hunting. As a result, animal species are in danger of extinction, tree species lose important seed dispersal, and local people lose an important supply of protein. Overall, this translates into an extraordinary imbalance for the ecosystem.

5.2.6 Marine Litter

Marine littering is synonymous with the plastic culture, which is just over 100 years old. Just as a minuscule amount of Freon discharged into the environment can trigger a gaping hole in the ozone layer, the disposal of plastic can invoke marine disasters. The global production of plastics has increased every year since the plastic revolution began (Figure 5.3). This plastic finds its way into the environment and into the oceans. Because plastic does not degrade or assimilate into the environment, the plastic content of the ocean increases proportionately. This increase can trigger many instances of imbalance, including CO2 not being absorbed by the ocean.

Figure 5.3 World plastic production (from Statista, 2018).

About 30 percent of the carbon dioxide that people have put into the atmosphere has diffused into the ocean through direct chemical exchange. In the absence of absorption by plants, carbon dioxide dissolving in the ocean creates carbonic acid, which increases the acidity of the water. Since 1750, the pH of the ocean's surface has dropped by 0.1, a 30 percent change in acidity. Such a change is drastic, with numerous consequences. The widespread occurrence of large plastic fragments in the sea, and the direct impact this can have both on marine fauna and on legitimate uses of the environment, has been well-documented. In recent years, the existence of smaller plastic particles, referred to as microplastics, and their potential impact have received increasing attention. This concerns particles smaller than 5 mm, and there is increasing evidence that such particles can be ingested by marine organisms, leading to large-scale harm at the lowest levels of the marine food chain (Zatzman, 2012).
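The "30 percent change in acidity" from a 0.1 pH drop can be checked directly, since pH is a base-10 logarithm of hydrogen-ion concentration. A minimal sketch of the arithmetic (the exact figure for a 0.1 drop is about 26%; the widely quoted ~30% corresponds to a drop slightly larger than 0.1):

```python
# pH is defined as -log10 of the hydrogen-ion concentration [H+],
# so a pH drop of x multiplies [H+] by 10**x.
def acidity_increase(ph_drop):
    """Fractional increase in hydrogen-ion concentration for a given pH drop."""
    return 10 ** ph_drop - 1

increase = acidity_increase(0.1)  # ~0.26, i.e. roughly a 26-30% rise in acidity
```

The logarithmic scale is the reason a seemingly small pH shift is "drastic": each further 0.1 of decline compounds multiplicatively rather than additively.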

5.2.7 Water Resources

Water is undoubtedly the most vital resource for the sustenance of life. Throughout history, water has been revered as such. According to Bertrand Russell, "Western philosophy" begins with Thales. Thales' most famous philosophical position was his cosmological thesis, which comes down to us through a passage from Aristotle's Metaphysics. In that work, Aristotle unequivocally reported Thales' hypothesis about the nature of matter – that the originating principle of nature was a single material substance: water. Islam et al. (2014) cited the Qur'anic verse (11:7) that confirms water to be the original creation: "And it is He who created the heavens and the earth in six days – and His Throne had been upon water – that He might test you as to which of you is best in deed. But if you say, 'Indeed, you are resurrected after death,' those who disbelieve will surely say, 'This is not but obvious magic.'" Just as the universe began with water, life also emerged from water. Water makes life as we know it possible. Every drop cycles continuously through air, land, and sea, to be used by someone (or something) else "downstream." Water covers 70% of Earth's surface, but only 3% is fresh, and only a fraction of one percent supports all life on land. It is considered that one percent of the planet's total water resources can be classified as accessible freshwater resources. As such, water has been the most important target for molestation of the ecosystem. Climate change and growing populations are increasing the pressures on that reserve. Figure 5.4 shows the per capita consumption of water in various countries. Although the discrepancy between the US and other countries is alarming, the most important point is that a large amount of water withdrawn from freshwater sources is rendered unsustainable by polluting it with synthetic chemicals (chlorination, fertilization, pesticide applications), never to be returned to its natural state.
For instance, in 2010, total irrigation withdrawals in the United States were 115,000 Mgal/d, accounting for 38 percent of total freshwater withdrawals and 61 percent of total freshwater withdrawals for all categories excluding thermoelectric power. Total irrigation withdrawals were 9 percent less than in 2005. Withdrawals from surface-water sources were 65,900 Mgal/d, or 57 percent of total irrigation withdrawals, almost 12 percent less than in 2005. Groundwater withdrawals for 2010 were 49,500 Mgal/d, or 6 percent less than in 2005. About 62,400 thousand acres were irrigated in 2010, an increase of about 950 thousand acres (1.5 percent) over 2005. The national average application rate for 2010 was 2.07 acre-feet per acre, 11 percent less than the 2005 average of 2.32 acre-feet per acre. Figure 5.5 shows the distribution. Khan and Islam's (2008) criterion can be used to determine that these resources are irreversibly polluted.
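The internal consistency of these figures can be verified with straightforward arithmetic. The sketch below re-derives the quoted surface-water share and converts the acreage and application rate back into a daily withdrawal rate (the factor of 325,851 gallons per acre-foot is the standard conversion):

```python
ACRE_FOOT_GALLONS = 325_851  # gallons per acre-foot (standard US conversion)

# 2010 US irrigation withdrawals quoted in the text, in Mgal/d:
surface, ground = 65_900, 49_500
total = surface + ground                 # 115,400, matching the rounded 115,000
share_surface = surface / total          # ~0.57, the quoted 57 percent

# Cross-check: acreage times application rate should reproduce the same total.
acres = 62_400e3                         # acres irrigated in 2010
rate = 2.07                              # acre-feet per acre, national average
mgal_per_day = acres * rate * ACRE_FOOT_GALLONS / 1e6 / 365
```

The acreage-based estimate comes out near 115,300 Mgal/d, agreeing with the reported withdrawals to within rounding, which suggests the quoted statistics form a coherent set.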

Figure 5.4 Annual per capita water consumption in metric tons in 2013 (from Statista, 2018a).

Figure 5.5 (from USGS, 2017).

Consequently, although water is virtually abundant, as much as two-thirds of the global population may live in regions with limited access to freshwater by 2050, as the world's population is predicted to reach 11.2 billion by 2100. While population levels are expected to increase fastest in emerging regions, water shortages will also be felt in industrialized countries – including the United States – where droughts and other weather-related catastrophes are set to become more frequent over the coming decades.

By 2050, industrial demand for water is expected to put enormous pressure on freshwater accessibility, thus shortening the amount of clean water available for agricultural and domestic uses. Since water is becoming increasingly scarce, the amount of water that is currently consumed per person in countries such as the United States can no longer be deemed acceptable.

5.2.8 Petroleum Resources

Even though it has been fashionable in the post-9/11 era to talk about 'removing the oil addiction' – a term first popularized by US President G. W. Bush (Islam et al., 2012) and glamourized as recently as 2018 by Saudi Crown Prince Mohammed Bin Salman (Hutt, 2016) – petroleum continues to be the driver of the modern economy. The same petroleum, however, has become the driver of the economic extremism that we discussed in Chapter 3. Figure 5.6 shows countries by their dependence on exports of fuel commodities, which include natural gas and coal as well as oil and oil products. Countries where fuel accounts for more than 90% of total exports include Algeria, Azerbaijan, Brunei Darussalam, Iraq, Kuwait, Libya, Sudan, and Venezuela.

Figure 5.6 Oil dependence of various countries (from Hutt, 2016).

Table 5.1 shows the Legatum prosperity index ranking of the countries shown in Figure 5.6, as compared to their oil dependence ranking. The Legatum Prosperity Index is an annual ranking developed by the Legatum Institute, a division of the private investment firm Legatum. The ranking is based on a variety of factors including wealth, economic growth, education, health, personal well-being, and quality of life. We found this index to be reflective of the sustainability criteria proposed by Khan and Islam (2007), although it is not wholly comprehensive. The 2017 Legatum Prosperity Index is based on 104 different variables analysed across 149 nations around the world. Source data include the Gallup World Poll, World Development Indicators, International Telecommunication Union, Fragile States Index, Worldwide Governance Indicators, Freedom House, World Health Organisation, World Values Survey, Amnesty International, and Centre for Systemic Peace. The 104 variables are grouped into 9 sub-indexes, which are averaged using equal weights:

1. Economic Quality
2. Business Environment
3. Governance
4. Education
5. Health
6. Safety & Security
7. Personal Freedom
8. Social Capital
9. Natural Environment
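The aggregation described above – an equal-weight average of nine sub-index scores – can be sketched as follows. The scores used here are hypothetical placeholders for illustration, not actual Legatum data:

```python
SUB_INDEXES = [
    "Economic Quality", "Business Environment", "Governance", "Education",
    "Health", "Safety & Security", "Personal Freedom", "Social Capital",
    "Natural Environment",
]

def prosperity_score(scores):
    """Equal-weight average over the nine sub-index scores."""
    if set(scores) != set(SUB_INDEXES):
        raise ValueError("expected one score per sub-index")
    return sum(scores.values()) / len(SUB_INDEXES)

# Hypothetical 0-100 scores, not actual Legatum data:
example = dict.fromkeys(SUB_INDEXES, 50.0)
example["Natural Environment"] = 86.0
score = prosperity_score(example)  # 54.0: one strong sub-index lifts the mean
```

Equal weighting means a single very strong (or very weak) sub-index shifts the composite by only one-ninth of its deviation, which is why, as discussed below, governance and personal freedom can drag down a country that scores modestly elsewhere.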

Table 5.1 Ranking of various countries on oil dependence and Legatum prosperity index.

Country         Oil dependence ranking   Ranking in oil production   Legatum prosperity index
Iraq                     1                         4                         142
Libya                    2                        30                         136
Venezuela                3                        11                         132
Algeria                  4                        18                         116
Brunei                   5                        41                           –
Kuwait                   6                         9                          80
Azerbaijan               7                        23                         106
Sudan                    8                        35                         147
Qatar                    9                        17                          47
Nigeria                 10                        13                         128
Saudi Arabia            11                         2                          78
Oman                    12                        19                          73
Kazakhstan              13                        16                          72
Russia                  14                         1                         101
Iran                    15                         5                         117
Colombia                16                        21                          66
Norway                  17                        15                           1
UAE                     18                         8                          39
Bahrain                 19                        51                          62
Bolivia                 20                        48                          76
Ecuador                 21                        26                          71
Ghana                   22                        42                          86
Indonesia               23                        22                          59
Canada                  24                         7                           8
Malaysia                25                        25                          42

The spider chart in Figure 5.7 is presented to show how the index was calculated for Saudi Arabia. In this chart, data points that appear further from the center represent better performance than points closer to the center. Note how Saudi Arabia, which ranks 78th on the prosperity scale, has modest marks for economics, health, social capital, and even education, whereas its overall ranking is poor – all because of governance and personal freedom issues (Figure 5.8). Note also how Saudi Arabia is ranked highly on both oil dependence and oil production. The question arises as to how to couple economics with overall prosperity. The problem is further highlighted with the example of Norway – a country that is ranked highly in oil dependence (17), oil production (15), and prosperity (no. 1). Figure 5.9 shows the spider diagram for Norway and Figure 5.10 shows various aspects of that ranking. Norway's top spot in the prosperity ranking is matched only by its top ranking in natural environment. While Norway is ranked 8th for economic quality, business environment, and personal freedom, its overall ranking comes out at no. 1. These two countries, on opposite sides of the 'civilization spectrum', offer an insight into how perception plays a big role in classifying nations. One thing that is certain is that oil dependence is particularly toxic for developing countries (Table 5.1), which show the maximum discrepancy between prosperity ranking and oil production ranking. In this analysis, Russia plays a peculiar role. It shows that despite its political and military prowess, Russia has failed to create a homogenous society that would rank highly in the prosperity index. In keeping with this, the gap between the highest and lowest scores in the Index has increased for five straight years, and the spread between nations is growing, indicating that while prosperity as a whole may be increasing for the world, not all countries are yet benefiting from the increase.
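The discrepancy argument can be made concrete by differencing the two rankings in Table 5.1. A minimal sketch using a subset of the table's rows:

```python
# (oil-production rank, Legatum prosperity rank) pairs taken from Table 5.1
rows = {
    "Iraq": (4, 142), "Libya": (30, 136), "Venezuela": (11, 132),
    "Saudi Arabia": (2, 78), "Norway": (15, 1), "Canada": (7, 8),
}

# A large positive gap means prosperity lags far behind oil production.
gap = {country: lpi - prod for country, (prod, lpi) in rows.items()}
worst = max(gap, key=gap.get)  # Iraq, with a gap of 138 places
```

On this crude measure, the major developing-country producers (Iraq, Libya, Venezuela) show gaps of over 100 places, whereas Norway's gap is negative, underscoring the point that oil wealth correlates with prosperity only in already-developed economies.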

Figure 5.7 Spider chart of Saudi Arabia.

Figure 5.8 Breakdown of Saudi Arabia’s prosperity index.

Figure 5.9 Norway’s prosperity index in spider chart form.

Figure 5.10 Breakdown of Norway's prosperity index.

Another measure of oil dependence is obtained by observing GDP shares. For the economies that rely most heavily on oil, Figure 5.11 uses 2012 World Bank data to show oil revenue as a share of GDP. Saudi Arabia comes third, after Kuwait and Libya, with roughly 45% of GDP depending on oil. Figure 5.11 thus shows which countries around the world are most reliant on oil, both as an export and as a share of GDP; in terms of fuel exports, Saudi Arabia is ranked 11th. For all practical purposes, GDP correlates well with petroleum as a primary energy (Figure 5.12).

Figure 5.11 Oil dependence in terms of GDP share and historical oil prices (World Bank, 2017).

Figure 5.12 Trends in GDP and energy intensity.

Only recently (Hutt, 2016), Saudi Arabia has said that it wants to end its "addiction to oil" with far-reaching reforms, and is now restructuring government departments to drive through plans for a post-petroleum era. This shake-up has come on the back of the steep and sustained drop in oil prices – from a peak of $115 per barrel in June 2014 to under $35 at the end of February 2016 – and marks a massive change of direction for the world's largest petroleum exporter, also the de facto leader of OPEC (the Organization of Petroleum Exporting Countries). The reform plan, first announced in April 2016 by then Deputy Crown Prince Mohammed bin Salman, includes creating the world's largest sovereign wealth fund, privatizing the state-owned oil company Saudi Aramco, cutting energy subsidies, expanding investment, and improving government efficiency. It turns out little had changed by 2018, when the talk of curing oil addiction subsided, yielding to other explanations of economic failure. What we see in this section is that petroleum resources have become a liability for developing countries but not for developed countries. This reversal of fortune can be attributed to the fact that developing countries export only crude oil while remaining entirely dependent on the value-added products that are derived from petroleum resources. As such, petroleum resources have become an example of unsustainable management practices for the developing countries.

5.3 Climate Change Hysteria

Even though petroleum continues to be the world's most diverse, efficient, and abundant energy source, "grim climate concerns" have pushed global initiatives toward a "go green" mantra. When it comes to defining 'green', numerous schemes are presented as 'green' even though all it means is that the source of energy is not carbon. In fact the 'left', often emboldened with 'scientific evidence', blames carbon for everything, forgetting that carbon is the most essential component of plants. The 'right', on the other hand, denies climate change altogether, stating that it is all part of the natural cycle and that there is nothing unusual about the current surge of CO2 in the atmosphere. Both sides ignore the real science behind the process. The left refuses to recognize that the artificial chemicals added during the refining process make petroleum inherently toxic, and that in the absence of these chemicals petroleum itself is 100% sustainable. The right, on the other hand, does not recognize the science of artificial chemicals that are inherently toxic and sees no need for any change in the modus operandi. More importantly, both sides see no need for a change in the fundamental economic outlook. Energy management has been a political issue rather than an economic strategy, and the USA has played a significant role in that. The establishment of the Department of Energy brought most Federal energy activities under one umbrella and provided the framework for a comprehensive and balanced national energy plan. The Department undertook responsibility for long-term, high-risk research and development of energy technology, Federal power marketing, energy conservation, the nuclear weapons program, energy regulatory programs, and a central energy data collection and analysis program. Recently, US President Donald Trump announced his desire to exit the Paris Accord. Epstein

(2017) points out that there are at least two principled ways to defend Trump's decision to exit the Paris Accord. The first is the weak scientific case that links global warming and other planetary maladies to increases in carbon dioxide levels; there are simply too many other forces that can account for shifts in temperature and the various environmental calamities that befall the world. Second, the economic impulses underlying the Paris Accords entail a massive financial commitment, including huge government subsidies for wind and solar energy, which have yet to prove themselves viable. In his speeches, President Trump did not state these two points, nor did he challenge his opponents to explain how the recent greening of the planet, for example, could possibly presage the grim future of rising seas and expanded deserts routinely foretold by climate activists (Yirka, 2017). In the absence of such an approach, the general perception of the public has been that President Trump simply wants to bully the rest of the world, prompting critics to use vulgar language to depict him as a classic bully.2 However, it is curious that the endless criticisms of the President all start from the bogus assumption that a well-nigh universal consensus has settled the science of global warming. To refute that fundamental assumption, it is essential to look at the individual critiques raised by prominent scientists and to respond to them point by point, so that a genuine dialogue can begin. More importantly, no scientist has pointed the finger at processing and refining as the root cause of global warming, and certainly none has contemplated pointing fingers at so-called renewable energy solutions that are more toxic to the environment than petroleum systems. Instead of asking for a logical answer, the President has disarmed his allies. For instance, through U.N. Ambassador Nikki Haley, he has all but conceded that climate change is "real".
Instead of starting with the social case against the substantive provisions of the Paris Accords, Trump justified his decision by invoking his highly nationalistic view of international arrangements. He said the United States was once again getting ripped off by a lousy treaty that, in his words, would force American "taxpayers to absorb the cost in terms of lost jobs, lower wages, shuttered factories, and vastly diminished economic production." He then insisted that his first duty is to the citizens of Pittsburgh, not of Paris, giving the impression that only provincial arguments support his decision. In the process, the debate becomes a choice between US hegemony and a holistic approach, as if the USA were on a collision course with true sustainability. Yet, ironically, the President has a stronger case on this point than he does with his attacks on free trade, which he justified in similar terms. Free trade has a natural corrective, in that no private firm will enter into any agreement that it believes will work to its disadvantage. That was decidedly not true of the Obama approach to the Paris Accords, which gives a free pass to China until 2030 even though its recent carbon emissions have increased by 1.1 billion tons, while the United States' total has dropped by 270 million tons and will continue to do so. But when it comes to the United States, the critics claim that the threat of greenhouse gases (GHGs) has never been greater, while saying that China may eventually implement greater GHG controls than required by its current commitment. The Chinese can reduce emissions far more rapidly than the US; the diplomatic pass represents a clear double standard. There is a general recognition in the US that developing countries have already benefited vastly from Western technology, including carbon-based energy, and market institutions that, as the

Cato Institute's Johan Norberg (2017) reminds us in his book, have done so much to ameliorate chronic starvation and poverty across the globe. Missing from this analysis is the scientific explanation of how every dollar received by the developing countries actually ends up working against those countries and contributes to their continued dependence on the West (Zatzman and Islam, 2007). Contrary to all popular arguments, the carbon dioxide that has caused havoc in the atmosphere is not something that can be 'cured' with the Green Climate Fund, and all solutions that are proposed to remedy the environmental insult are actually more toxic to the environment than the original offense (Islam et al., 2012). The political risk of the Green Climate Fund lies in its false characterization of advanced Western nations as despoilers of less developed countries. The economic analysis that is often 'sold' as the only solution by the scientific community is also misleading. These studies show dramatic declines in jobs and production – astonishing economic losses for the United States – if the policies embodied in the Paris Accords are fully implemented. These numbers are simply too large to be credible, given the adaptive capacity of the American industrial sector. Contrary to what Trump says, U.S. production will not see "paper down 12 percent; cement down 23 percent; iron and steel down 38 percent; coal … down 86 percent; natural gas down 31 percent." As the Wall Street Journal (WSJ, 2017) has noted, the level of carbon efficiency in the United States has improved vastly in the last decade because of innovations that predate the Paris Accords. That trend will continue. Traditional forms of pollution generate two forms of loss, which are addressed by current laws. First, nothing about the Trump decision exempts domestic U.S. polluters from federal and state environmental laws and lawsuits that target their behavior.
It is precisely because these laws are enforced that coal, especially dirty coal, has lost ground to other energy sources. Second, pollution is itself inefficient, for it means that the offending firms have not effectively utilized their production inputs. These two drivers toward cleaner air and water – one external, one internal – explain why American technological innovation will continue unabated after Paris, as long as true sustainability is understood and implemented. If such actions of Trump were aimed at gaining praise from his detractors, they have not worked, and the lines that the U.S. "will continue to be the cleanest and most environmentally friendly country on Earth" have fallen on deaf ears as his critics continue to vilify him. As pointed out by Epstein (2017), one comical irony of the current debate is that the New York Times seems to have conveniently forgotten that carbon dioxide is colorless, odorless, and tasteless. Why else would it print two pictures – one of a dirty German power plant and the other of a dirty Mongolian steel plant – to explain why other "defiant" nations will not follow the U.S. now that it has withdrawn from Paris? It is likely that the New York Times would find far fewer plants in the U.S. that dirty. Indeed, one tragedy of Paris is that the nations adhering to it will invest more in controlling GHGs than in controlling the more harmful forms of pollution that developed nations have inflicted on themselves. One of the advantages of getting out of Paris is that it removes any systematic pressure for American firms to "hop on the wind and solar bandwagons". Those firms that urged Trump to subsidize this market are free to enter it themselves, without dragooning skeptical firms and

investors into the fold. Throughout the Obama era, these companies received subsidies and much research support while conducting no research into the true sustainability of these schemes. Chhetri and Islam's (2008) analysis shows that none of them is sustainable and that they are far more toxic to overall environmental integrity. Withdrawal also cuts down the risk that environmental lawyers will turn the Paris Accords into a source of domestic obligations even though it supposedly creates no international obligations. It is easily provable that withdrawal from the treaty will do nothing to hurt the environment, and may do something to help it. With or without the hysteria, the earth has been through far more violent shocks than any promised by changes in carbon dioxide levels. This is not to say that petroleum production is inherently toxic or that it cannot be rendered sustainable: Islam et al. (2012) have shown that rendering petroleum sustainable is much easier than rendering wind or solar energy sustainable. It is important to keep priorities straight when the U.S. and other nations around the world face major challenges on matters of economic prosperity and international security. Withdrawing from the Paris accord will allow the United States to focus its attention on more pressing matters, such as finding real solutions to sustainability problems.

5.4 The Energy Crisis

Ever since the oil embargo of 1973, the world has been gripped by fear of an 'energy crisis'. In 1977, U.S. President Jimmy Carter told the world in a televised speech that the world was in fact running out of oil at a rapid pace – the popular Peak Oil theory of the time – and that the US had to wean itself off the commodity. Since the day of that speech, worldwide oil output has actually increased by more than 30%, and known available reserves are higher than they were at that time. This hysteria has survived the era of Reaganomics, President Clinton's cold war dividend, and President G.W. Bush's post-9/11 era of 'fearing everything but petroleum', and today even the most ardent supporters of the petroleum industry have been convinced that an energy crisis is looming and that it is only a matter of time before we are forced to switch to a non-petroleum energy source. Then come the peddlers of 'renewable energy', amassing resources from the public with the promise of offering 'clean energy'. The crisis was scientifically fomented through the advancement of the so-called Peak Oil theory, which became the driver of many other theories with great impact on economic policies. Peak oil is a concept promoting the notion that the global oil reserve is limited and at some point will start to run out, leading to a sharp rise in oil prices (Speight and Islam, 2015). This theory paints two possible pictures of the energy outlook, namely: 1) a worldwide depression will follow the peak in oil production as high prices drag down the whole world's economy; and 2) alternate energy sources have to be introduced in a hurry in order to prevent the energy crisis. Embedded in the theory is the notion that per capita energy need is increasing globally and will continue to increase because of 1) modernization, which inducts more people into the urban energy-intensive lifestyle; and 2) increasing population.
Because it is also assumed that oil reserves are limited, it follows that at some point oil production will peak, after which the amount of oil being produced will decline. It is believed that once the decline has started it will become a terminal decline, and oil production will never again reach its peak levels. When this happens, there are going to be serious consequences for the world's economy. Since the demand for oil is unlikely to decline, the price will inevitably increase, probably quite dramatically. The crisis attributed to peak oil theory is proposed to be remedied with 1) austerity measures that decrease dependence on energy, possibly decreasing per capita energy consumption; and 2) alternatives to fossil fuel. Neither of these measures is appealing, because any austerity measure can induce imbalance in an economic system that depends on the spending habits of the population, and any alternative energy source may prove to be more expensive than fossil fuel. These concerns create panic, which is beneficial to certain energy industries, including biofuel, nuclear, wind, and others. Add to this problem the recent hysteria built on the premise that oil consumption is the reason behind global warming, which has itself created opportunities, with many sectors engaged in carbon sequestration. The upcoming section of this chapter makes it clear that the underlying premises of the peak oil theory are entirely spurious.
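For reference, the peak-then-terminal-decline premise criticized here is usually formalized as Hubbert's logistic production curve. A minimal sketch of that curve follows, to make the theory's assumption explicit rather than to endorse it; the parameter values are arbitrary illustrations:

```python
import math

def hubbert_production(t, urr, peak_year, width):
    """Logistic (Hubbert) production rate with a symmetric rise and decline.
    urr is the assumed ultimately recoverable resource; width sets the spread."""
    x = math.exp(-(t - peak_year) / width)
    return urr * x / (width * (1 + x) ** 2)

# Production is maximal at the assumed peak year and declines symmetrically
# afterward, which is exactly the terminal-decline picture the theory asserts.
rates = {year: hubbert_production(year, urr=2_000, peak_year=2010, width=15)
         for year in (1990, 2010, 2030)}
```

Note that every output of this model is dictated by the fixed-reserve assumption fed into it (the urr parameter); the section that follows questions precisely that assumption.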

5.4.1 Are Natural Resources Finite and Human Needs Infinite?

In economics, the notion of infinite need and finite resources is a fundamental premise that is asserted with dogmatic fervor. In the context of petroleum resources, this notion has helped foment the fear that is actually the driver of contemporary economics. The model starts off with the premise that needs must grow continually in order for the economy to thrive. It then implies, without examining the validity of that premise, that there has to be an endless supply of energy to feed it. Because such an endless supply contradicts the other premise, that natural resources are finite, an inherent contradiction arises. One such article is written by Mason (2017), who poses this wrongheaded question: "But what happens to that equation when the net amount of energy we extract from the earth is shrinking? How, then, does an economy grow exponentially forever if the one element it needs more than anything to flourish is contracting with time?" He then primes the audience for the need of a paradigm shift that would involve challenging all orthodoxies involving the economy, as if to propose a revolution. Next, he creates a prophet out of a neuroscientist, Chris Martenson, who in recent years has turned his attention to the economy, particularly as it relates to dwindling energy resources and growing debt. Note how the premise of 'dwindling energy resources' is embedded in this 'revolutionary' concept. How revolutionary is it? He writes: "He also got rid of most any equity stocks and put his money in gold and silver. He has been labelled a prophet of doom and a survivalist, by some. But more recently, his views have been receiving wider and more serious attention. He has been to Canada to talk to oil and gas investors, of all people. That's incongruous given his view that we're pillaging the Earth of its energy resources in the most inefficient and wasteful ways possible."

Intuitively, it sounds simple: if I use up a certain amount of a finite quantity each year, it will eventually run out. But that only tells you that you cannot have constant or increasing resource extraction from a finite resource; it does not tell you anything about what you do with the resources you extract, how productive they are, or whether or not they enable continued economic growth. It is certainly possible to sustain exponential growth indefinitely with finite resources, as long as the usage is confined to sustainable, zero-waste operations. Similarly, all proposed solutions end up proposing to minimize waste and maximize profit – an economic euphemism for the Utilitarianism that has been preaching 'maximizing pleasure and minimizing pain' at a personal level. There has always been plenty of discussion in economics discourse about manipulating the interest rate, but never about eliminating it. There are plenty of suggestions on how to minimize waste, but no one proposes a solution to achieve zero waste. There is even talk of continuously increasing productivity, but never of the fundamental assumption of infinite need and finite resources. The notion of 'The Infinite' has intrigued humanity for a long time. In ancient civilizations, infinity was not a 'large number'. It was something external to creation. In other words, only a Creator was considered to be infinite, along with many other traits that could not be part of Creation. However, this 'infinity' has nothing to do with the unbounded-ness of nature, which has no boundary. Even though the ancient Greeks had a similar concept of infinitude, post-Aquinas Europe developed an entirely different take on it, one highlighted recently by Khan and Islam (2016).
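The arithmetic point about growth with finite resources can be illustrated with a geometric series: if resource use per unit of output falls faster than output grows, cumulative extraction converges to a finite bound even as output grows exponentially forever. A minimal sketch with illustrative (not empirical) growth and efficiency rates:

```python
# Output grows by g per year while resource use per unit of output falls by d.
# Annual extraction in year t is then ((1 + g) * (1 - d)) ** t, and whenever
# (1 + g) * (1 - d) < 1 the cumulative total converges (a geometric series).
def cumulative_extraction(g, d, years):
    r = (1 + g) * (1 - d)
    return sum(r ** t for t in range(years))

# Illustrative rates: 3% output growth with a 5% annual efficiency gain.
r = 1.03 * 0.95                  # common ratio, about 0.9785
bound = 1 / (1 - r)              # geometric-series bound, about 46.5 units
total_500yr = cumulative_extraction(0.03, 0.05, 500)
```

Under these assumed rates, total extraction never exceeds roughly 46.5 times the first year's draw, no matter how long growth continues, which is the formal version of the text's claim that finite resources do not by themselves preclude indefinite growth.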
In a study published nearly two decades ago, Lawrence Lerner, Professor Emeritus in Physics and Astronomy at California State University, Long Beach, was asked to evaluate how Darwin's theory of evolution was being taught in each state of the United States (Lerner, 2000). In addition to his attempt to find a standard in K-12 teaching, Lerner made some startling revelations. His recommendations created controversy, with many suggesting he was promoting "bad science" in the name of "good science." However, no one singled out another aspect of his findings. He observed that "some Native American tribes consider that their ancestors have lived in the traditional tribal territories forever." He then equated "forever" with "infinity" and continued, stating, "Just as the fundamentalist creationists underestimate the age of the earth by a factor of a million or so, the Black Muslims overestimate by a thousand-fold and the Indians are off by a factor of infinity." (Lerner, 2005). This confusion between "forever" and "infinity" is not new in modern European culture. In the words of Albert Einstein, "There are two things that are infinite, human stupidity and the Universe, and I am not so sure about the Universe." Even though the word "infinity" emerges from the Latin infinitas, meaning "unbounded-ness," for centuries this word has been applied in situations in which it promotes absurd concepts. In Arabic, the equivalent word (la nahya) means "never-ending." In Sanskrit, similar words exist (aseem, meaning 'without end'), and those words are never used in mathematical terms as a number. The use of infinity to enumerate something (e.g., an infinite number of solutions) is considered absurd in other cultures. Nature is infinite – in the sense of being all-encompassing – within a closed system that nevertheless lacks any boundaries. Somewhat paradoxically, nature as a system is closed in the

sense of being self-closing. This self-closure property has two aspects. First, everything in a natural environment is used. Absent anthropogenic interventions, conditions of net waste or net surplus would not persist for any meaningful period of time. Secondly, nature's closed system operates without benefit of, or dependence upon, any internal or external boundaries. Because of this infinite dimension, we may deem nature – considered in net terms as a system overall – to be perfectly balanced. Of course, within any arbitrarily selected finite time period, any part of a natural system may appear out of balance. However, to look at nature's system without acknowledging all the subtle dependencies that operate at any given moment introduces a bias that distorts any conclusion asserted on the basis of such a narrow approach. From where do the imbalance and unsustainability that seem so ubiquitous in the atmosphere, the soil, and the oceans actually originate? As the "most intelligent creation of nature," humans were expected at the very least to stay out of the natural ecosystem. Einstein might have had doubts about human intelligence or the infinite nature of the Universe, but human history tells us that human beings have long managed to rely on the infinite nature of nature. From Central American Mayans to Egyptian Pharaohs, from Chinese Huns to the Manichaeans of Persia, and from the Edomites of the Petra Valley to the Indus Valley civilization of the Asian subcontinent, all managed to remain in harmony with nature. They were not necessarily free from practices that we would no longer condone (Pharaohs sacrificed humans to accompany the dead royal on resurrection day), but they did not produce a single gram of an inherently anti-nature product, such as DDT. In modern times, we have managed to give a Nobel Prize (in medicine) for that invention. Islam et al.
(2010, 2012) and Khan and Islam (2016) have presented detailed accounts of how our ancestors dealt with energy needs, and of the knowledge they possessed that is absent in today's world. Whatever technology these ancient civilizations lacked that many might look for today, our ancestors took care not to develop technologies that might undo or otherwise threaten the balance of nature that, today, seems so desirable and worth emulating. Nature remains, and will remain, truly sustainable.

5.4.2 The Peak Oil Theory and its Connections to Population and Lifestyle

As is well known, the current version of the peak oil theory emerged from M. King Hubbert's 1956 presentation entitled "Nuclear Energy and the Fossil Fuels" (Hubbert, 1956). The peak oil theory says that for any given geographical area, from an individual oil-producing region to the planet as a whole, the rate of petroleum production follows a bell-shaped curve. This bell curve (Figure 5.13), which has been the basis of many theories popularized in almost all disciplines of the modern era, dictates that there will be a global peak in oil recovery, after which oil production declines irreversibly and monotonically.
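Hubbert's bell curve is conventionally written as the derivative of a logistic curve for cumulative production. A minimal sketch of that formula follows; the parameter values are illustrative only, not fitted to any dataset:

```python
import math

def hubbert_production(t, urr, k, t_peak):
    """Annual production rate under Hubbert's logistic model.

    Cumulative production Q(t) = urr / (1 + exp(-k*(t - t_peak))), where
    urr is the ultimately recoverable resource. The production rate is
    the derivative of Q(t): a symmetric bell curve that peaks at t_peak
    with a maximum rate of k*urr/4 per year."""
    e = math.exp(-k * (t - t_peak))
    return urr * k * e / (1.0 + e) ** 2

# Illustrative numbers only: URR = 200 Gbbl, k = 0.06 per year, peak in 1970
print(round(hubbert_production(1970, 200, 0.06, 1970), 2))  # peak rate: 3.0 Gbbl/yr
print(hubbert_production(1950, 200, 0.06, 1970) <
      hubbert_production(1970, 200, 0.06, 1970))            # True: rising before the peak
```

The symmetry of this curve is exactly what the criticism later in the chapter targets: the model forces production to fall after the peak at the same pace it rose, regardless of technology or market structure.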

Figure 5.13 The bell curve has been the base curve of many theories in the modern era (the x-axis is replaced with time and the y-axis with global oil production). This graph represents a typical increase in oil production, eventually reaching the peak. The increase occurs because of the global demand for energy, which depends on world population as well as on globalization, which goes hand in hand with urbanization and thus increases per capita energy consumption. Figure 5.14 shows population growth since 1800. Actual data collection started in the 1940s, so earlier values are estimates. The figure also shows a projection to nearly 2100, based on a United Nations medium projection. In fact, previous models based their guesses about population growth on assumptions similar to those of the peak oil theory.3

Figure 5.14 Population growth history and projection (data from CIA Factbook, UN). As early as 2006, the United Nations stated that the rate of population growth was visibly

diminishing due to the ongoing global demographic transition. Given that trend, it was foreseen that the rate of growth might diminish to zero by 2050, concurrent with a world population plateau of 9.2 billion (UN 2010). However, this is only one of many estimates published by the UN. Each report was based on a different set of assumptions, leading to different projections. In 2009, UN population projections for 2050 ranged between around 8 billion and 10.5 billion (UN 2009). None of these projections includes negative growth, even though some predict zero growth leading to a sort of pseudo-stable population. This is exceptional, because virtually all other growth models dealing with natural systems have a bell-shaped curve as their foundation. Randers (2012) was the first to predict negative growth after the global population reaches a plateau. This alternative scenario rests on the argument that all existing projections insufficiently take into account the downward impact of global urbanization on fertility. Randers presents population decline as the inevitable outcome of urbanization, whose effect overshadows any other factor. Randers' "most likely scenario" shows a peak in the world population in the early 2040s at about 8.1 billion people, followed by decline (Randers, 2012). Three different projections are plotted in Figure 5.15.

Figure 5.15 Estimated, actual, and projected population growth (decline). While researchers have attempted to characterize global population growth with little consensus, there appear to be definite trends in various geographical locations. For instance, Figure 5.16 shows how the population has grown and declined in various geographical enclaves since 1950, the time from which actual data became available. The y-axis shows population in millions. Clearly, Europe exhibited the lowest growth rate from the 1950s, with a declining rate of growth as early as the 1960s. This resulted in a plateau by 1990, followed by a continuous decline in population despite a surge in the immigrant population. To a certain extent, it can be said that Europe represents the model that forms the basis of practically all resource models used in modern times, including the peak oil theory.

Figure 5.16 World population growth for different continents. Individually, Africa shows the highest rate of growth among all continents. Asia and Latin America started off with similar growth rates, but Asia and then Latin America exhibited slowing growth, eventually matching that of North America. Of course, North America, which includes Canada, would actually show negative growth were it not for its very high immigration rate. The growth rate started to decline in Asia in the 1980s, the result of a sustained campaign of birth control in some of the most populous countries in the world, namely China, India, Pakistan, Bangladesh, etc. This campaign started in the 1960s, but the results began to show up in the 1980s. During the same period, the campaign of urbanization also began. Fueled by the 'green revolution' that also took effect starting in the 1960s, urbanization has been in full gear, its most direct outcome being increased per capita energy consumption. This aspect will be discussed later. Figure 5.17 shows how the population in more-developed countries reached a plateau while that of less-developed countries continued to grow, albeit at a slower rate. In terms of global energy need, this figure presents an interesting divide. On average, the energy consumption per capita of the 'less-developed countries' is an order of magnitude less than that of the 'more-developed countries'. In mathematical terms, it means the world has the capacity to sustain the energy needs of the majority of the population even if that population increased 10-fold. In practical terms, it means that if we could contain per capita energy consumption, we would have no worries about natural population growth. Indeed, the energy consumption of the 'more-developed countries' has been contained. In the last 20 years, the most populous 'developed

country', the USA, has shown practically constant per capita energy consumption. The USA is an important case, as this country personifies the global trend in energy consumption. Historically, the USA has set the standards for technology development and other tangible aspects of civilization for a period synonymous with the golden era of petroleum – i.e., whatever it does today is emulated by the rest of the world in years to come. Table 5.2 shows the per capita energy consumption (in tons of oil equivalent per year) of the USA in the last few decades, along with predictions for 2015. In this table, Canada represents an interesting case. Canada follows the USA's trend closely in per capita energy consumption but falls far behind in population growth, expenditure on research and development (particularly in energy and environment), expenditure on defense and pharmaceutical industries, and other long-term economic stimuli. Japan, on the other hand, represents the other extreme of the energy-consciousness spectrum. As can be seen in Table 5.2, Japan maintains a steady per capita energy consumption at almost half that of Canada. At the same time, Japan has maintained very high relative investment in education and in research and development. However, Japan's population has been dropping, keeping pace with Europe and unlike the USA. Canada's population growth has been a mix of Europe/Japan (decline) and the USA (mild growth). The difficulty involved in maintaining a balance between urbanization and per capita energy consumption is most starkly manifested in the case of Saudi Arabia. Both Germany and Russia show mild per capita energy consumption, signaling prudent usage of energy sources and high energy efficiency. Saudi Arabia is a 'developing country' by all measures, except that it was projected to be the most energy-consuming country in the world by 2015.
As early as 1995, it exceeded the per capita energy consumption of Russia and Germany, and it was slated to exceed that of the USA by 2015. Saudi Arabia represents the trend among 'developing countries' to emulate the wasteful habits of the USA while shunning the USA's positive aspects in economic growth, education, and research and development. This trend in Saudi Arabia is alarming and is a hallmark of a global obsession with wasteful energy habits. Saudi Arabia is just one example of this obsession, which is all-pervasive in the developing countries, as can be seen in Figure 5.18.

Figure 5.17 There are different trends in population growth depending on the state of the economy.

Table 5.2 Per capita energy consumption (in TOE) for certain countries.

Country        1990  1995  2000  2005  2010  2015
USA             7.7   7.8   8.2   7.9   7.3   7.3
Canada          7.5   7.9   8.1   8.4   7.6   7.6
Japan           3.6   4.0   4.1   4.1   3.7   3.9
Germany         4.4   4.1   4.1   4.1   4.0   3.8
Russia          5.9   4.3   4.2   4.5   4.8   5.5
Saudi Arabia    3.9   4.8   5.1   6.0   6.6   7.7
China           0.9   0.9   1.3   1.8   2.2   0.8
India           0.4   0.5   0.5   0.6   0.7   0.4
Indonesia       0.7   0.7   0.8   0.9   1.2   0.6
Sri Lanka       0.3   0.4   0.5   0.5   0.6   0.3

Figure 5.18 Per capita energy consumption growth for certain countries. Figure 5.18 shows the growth in per capita energy consumption for some key countries that are not characterized as 'more-developed countries'. These countries all had very modest per capita energy needs in 1990. However, they all show exponential growth in energy needs over the last two decades. China leads the pack with the highest growth, nearly tripling its energy need in 25 years. This trend shows that China could have dealt with its 'population crisis' by keeping per capita energy consumption in check, avoiding many shortcomings of the one-child policy that China imposed on its population for decades. Similar growth is shown by Indonesia – another country that attempted to decrease its population growth while increasing per capita energy needs. Over the two decades, Indonesia doubled its per capita energy consumption. India has shown restraint in per capita energy consumption; even so, its per capita energy consumption doubled during the decades of concern. Sri Lanka has been the lowest energy-consuming country on the list but still shows growth very similar to that of India and Indonesia.
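The growth factors described here can be checked directly against the 1990 and 2010 columns of Table 5.2:

```python
# Per capita energy consumption (TOE/yr), 1990 and 2010 values from Table 5.2
consumption = {
    "China":     (0.9, 2.2),
    "India":     (0.4, 0.7),
    "Indonesia": (0.7, 1.2),
    "Sri Lanka": (0.3, 0.6),
}

# Growth factor over the two decades for each country
for country, (v1990, v2010) in consumption.items():
    print(f"{country}: {v2010 / v1990:.1f}x growth, 1990-2010")
```

This reproduces the text's figures: China grows about 2.4-fold over 1990-2010, Sri Lanka doubles, and India and Indonesia grow roughly 1.7- to 1.8-fold.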

It has been recognized for some time that there is a strong correlation between per capita energy need and GNP (as well as GDP). Over the last 30 years, the average consumption of the global 'South' has been nearly an order of magnitude less than that of the 'West' (Goldemberg et al., 1985; Khan and Islam, 2012). While the West has been trying to boost its population and contain its per capita energy consumption while increasing its GNP, the 'South' has been trying to contain its population while increasing both per capita energy consumption and GNP. These contradictory measures have created confusion in both the West and the 'South'. This is most visible in the definitions of GNP and GDP, which reward an economy for increasing wasteful habits (e.g., per capita energy consumption). This contradiction was discussed by Khan and Islam (2007), who introduced new techniques for measuring economic growth that could take account of true sustainability. They showed that true sustainability would increase GNP by increasing efficiency (rather than by increasing per capita energy consumption). Figure 5.19 shows how energy consumption has become synonymous with societal welfare, expressed as a tangible measure of the 'quality of life'.4 Goldemberg et al. (1985) correlated per capita energy consumption with a Physical Quality of Life Index (PQLI), which is an attempt to measure the quality of life or well-being of a country. The value is the average of three statistical data sets: basic literacy rate, infant mortality, and life expectancy at age one, all equally weighted on a 0 to 100 scale. It was developed for the Overseas Development Council in the mid-1970s by Morris David Morris, as one of a number of measures created out of dissatisfaction with the use of GNP as an indicator of development. PQLI is best described as a measure of the tangible features of a society, not unlike GDP (Khan and Islam, 2007).
Ever since, numerous other indices have been proposed, including the more recently developed happiness indices, but they all suffer from similar shortcomings, i.e., a focus on tangibles, as outlined by Khan and Islam (2012) and Zatzman (2012, 2013). The following steps are used to calculate the Physical Quality of Life:
1. Find the percentage of the population that is literate (the literacy rate).
2. Find the infant mortality rate (per 1,000 births). Indexed Infant Mortality Rate = (166 – infant mortality rate) × 0.625
3. Find the life expectancy at age one. Indexed Life Expectancy = (life expectancy – 42) × 2.7
4. Physical Quality of Life = (Literacy Rate + Indexed Infant Mortality Rate + Indexed Life Expectancy)/3
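These steps translate directly into a few lines of code; the sample inputs below are hypothetical, chosen only to illustrate the arithmetic:

```python
def pqli(literacy_rate, infant_mortality, life_expectancy):
    """Physical Quality of Life Index (Morris, mid-1970s), 0-100 scale.

    literacy_rate    : percent of population that is literate (0-100)
    infant_mortality : deaths per 1,000 live births
    life_expectancy  : life expectancy at age one, in years
    """
    indexed_mortality = (166 - infant_mortality) * 0.625
    indexed_life_exp = (life_expectancy - 42) * 2.7
    # Equal-weight average of the three indexed components
    return (literacy_rate + indexed_mortality + indexed_life_exp) / 3

# Hypothetical country: 90% literacy, 30 deaths per 1,000, life expectancy 70
print(round(pqli(90, 30, 70), 1))  # prints 83.5
```

Note that the scaling constants (166, 0.625, 42, 2.7) simply map the observed ranges of infant mortality and life expectancy onto a 0-100 scale so the three components can be averaged with equal weight.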

Figure 5.19 A strong correlation between a tangible index and per capita energy consumption has been at the core of economic development (from Goldemberg et al., 1985). This trend goes back to the earliest times of the Industrial Revolution, more than two and a half centuries ago. Khan and Islam (2012) discussed the mindset that promoted such wasteful habits in all disciplines. Figure 5.20 summarizes the dilemma. At the dawn of the industrial age, civilization began to be defined by consumption and wasteful habits. As the population grew, energy consumption per capita should have decreased in order to compensate for the increasing energy demand. This would be in line with the claim that industrialization had increased human efficiency.

Figure 5.20 While population growth has been tagged as the source of economic crisis, wasteful habits have been promoted in the name of emulating the West. The opposite happened in the developed countries. For centuries, per capita energy consumption increased, along with dependence on mechanization. It only stabilized in the 1990s. By then, population growth in the West had been arrested, and populations were declining in most parts (the exception being the USA). This population and energy paradox was further accentuated by encouraging the developing countries to emulate the West in wasteful habits. In every country, consumption per capita increased with time as a direct result of colonialism and an imposed culture obsessed with externals and short-term gains. As a result, a very sharp increase in per capita energy consumption took place in the developing countries. As can be

seen from Table 5.2, even with such an increase, the "South" has not caught up with the "West", with the exception of some petroleum-rich countries. A major case in point here is China. For the last two decades, it attempted to curtail its population growth with a one-child-per-family law. The current Chinese government, at the behest of the latest congress of the Communist Party of China, has now repudiated this policy as practically unenforceable. Even more interesting, however, is that Figure 5.21 shows that population growth has in fact been dwarfed by the increase in per capita energy consumption. A similar conclusion emerges from the comparable statistical profile for the Indian subcontinent, where infanticide and sex-selective abortion are practiced to boost the male population at the expense of the female population, which is considered a drain on the economy. This finding is meaningful considering that India and China hold one-third of the world's population and can effectively change the global energy outlook either in favor of or against sustainability.

Figure 5.21 Population and energy paradox for China (from Speight and Islam, 2016). In order to change the above trend and address the population and energy paradox, several indices have been introduced. These indices measure happiness in holistic terms. Comparing one person's level of happiness to another's is problematic, given that, by its very nature, reported happiness is subjective. Comparing happiness across cultures is even more complicated. Researchers in the field of "happiness economics" have been exploring possible methods of measuring happiness, both individually and across cultures, and have found that cross-sections of large data samples across nations and time demonstrate "patterns" in the determinants of happiness. The New Economics Foundation was the first to introduce the term "Happiness index", in the mid-2000s (Khan and Islam, 2007; White, 2007). In the first-ever ranking, Bangladesh, one of the poorest nations of the time, was found to be the happiest among some 150 countries surveyed. At that time, Bangladesh was among the lowest-GDP countries, with very low per capita energy consumption. This study suggested that happiness is in fact inversely related to per capita energy consumption or GDP. Before this study could set any global trend in energy policies, a number of similar happiness indices

were introduced in succession, all showing a direct, albeit broad, correlation between GDP and happiness. One such index is the Happy Planet Index (HPI), which ranks 151 countries across the globe on the basis of how many long, happy, and sustainable lives they provide for the people that live in them per unit of environmental output. It represents the efficiency with which countries convert the earth's finite resources into well-being experienced by their citizens. The global HPI incorporates three separate indicators: a. ecological footprint: the amount of land needed to provide for all resource requirements plus the amount of vegetated land needed to absorb all CO2 emissions, including the CO2 emissions embodied in the products consumed; b. life satisfaction: health as well as "subjective well-being" components, such as a sense of individual vitality, opportunities to undertake meaningful, engaging activities, inner resources that help one cope when things go wrong, close relationships with friends and family, and belonging to a wider community; c. life expectancy: child deaths are included, but not deaths at birth or abortions. The first item couples CO2 emission levels with the carbon footprint measure. This emission measure relates only to fossil fuel usage and does not take into account the fact that CO2 emitted from refined oil is inherently tainted with catalysts that are added during the refining process. This creates a bias against fossil fuels and obscures the possibility of finding any remedy to the energy crisis. The Organisation for Economic Co-operation and Development (OECD) introduced the Better Life Index. It includes 11 topics that the OECD has identified as essential to well-being in terms of material living conditions (housing, income, jobs) and quality of life (community, education, environment, governance, health, life satisfaction, safety, and work-life balance).
It then allows users to interact with the findings and rate the topics against each other to construct different rankings of well-being depending on which topic is weighted more heavily. For the purpose of this analysis, what matters is the Life Satisfaction survey. Life satisfaction is a measure of how people evaluate the entirety of their life, not simply their feelings at the time of the survey. The OECD study asks people to rate their own life satisfaction on a scale of 0 to 10. The ranking covers the organization's 34 member countries plus Brazil and Russia. The Happy Planet Index ranked Costa Rica as the happiest country in 2012. The particularly high score relates to high life expectancy and overall well-being. Vietnam and Colombia follow in second and third place. Of the top ten countries, nine are from Latin America and the Caribbean. Countries from Africa and the Middle East dominate the bottom of the ranking. Botswana is last, after Bahrain, Mali, the Central African Republic, Qatar, and Chad. Developed nations such as the United States and the European Union member countries tend to score high on life expectancy and medium-to-high in well-being, but rather poorly on ecological footprint, which puts them in the ranking's second tier.
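In its early form, the HPI combined the three indicators above roughly as "happy life years" (life satisfaction scaled by life expectancy) divided by ecological footprint. The sketch below uses that simplified ratio with made-up inputs; the published index applies further statistical adjustments, so this is illustrative only.

```python
def happy_planet_index(life_satisfaction, life_expectancy, footprint):
    """Simplified early-form HPI: 'happy life years' per unit of
    ecological footprint. Omits the constants and inequality
    adjustments used in the official index.

    life_satisfaction : average self-reported score, 0-10
    life_expectancy   : years
    footprint         : global hectares per capita
    """
    happy_life_years = life_satisfaction / 10.0 * life_expectancy
    return happy_life_years / footprint

# Two hypothetical countries: a modest-footprint one vs. an affluent,
# high-footprint one with slightly higher life expectancy
print(happy_planet_index(7.5, 78, 2.5) >
      happy_planet_index(7.0, 80, 8.0))  # the low-footprint country ranks higher
```

The division by footprint is what pushes high-consumption developed nations into the second tier despite their high life expectancy and well-being scores.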

5.4.3 Evidence in Favor of the Peak Oil Theory

Hubbert's peak was thought to have been reached in the continental US in the early 1970s. Oil

production peaked at 10,200,000 barrels per day (1,620,000 m3/d) and then declined for several years. This purported evidence for Hubbert's peak seemed to dissipate as Alaska's Prudhoe Bay came into production.5 However, Alaskan production was not enough to sustain longer-term production growth, and overall oil production declined steadily until the recent boom in unconventional oil and gas. Figure 5.22 shows US crude oil production along with crude oil imports. As can be seen from Figure 5.22, until the early 1970s, oil production continued to rise monotonically even as oil imports rose sharply. As oil production reached a peak, a peak in oil imports followed. This was a time in which the United States was increasing its oil reserve. Oil imports dropped sharply after the Iranian revolution and did not recover their pre-revolution level until the mid-1990s. During this period, the world oil price was exceptionally low, and dividends from the end of the Cold War were affecting the US economy positively. Oil imports continued to rise until the 2008 financial crisis and the sudden hike in the oil price at that same point.

Figure 5.22 Oil production and import history of the USA (data from EIA). Plotting Hubbert's prediction of US oil production against observed production, there appears to be reasonable agreement. Figure 5.23 shows Hubbert's prediction against observed data in the USA.

Figure 5.23 US data that appear to support Hubbert's "peak oil" hypothesis (from Speight and Islam, 2016). US EIA data show average oil production rose by a further 900,000 barrels a day. This surge was due to unconventional oil and gas, mainly enabled by fracking technology as well as horizontal wells. Together, they have unlocked deposits of oil and gas trapped in formations previously thought to be unreachable. Kashi (2013) reports that the USA is now the leading oil and gas producer in the world. Recent growth in US domestic oil production is unprecedented in history (Fowler, 2013). A government report published in the summer of 2013 revealed that U.S. domestic crude-oil production exceeded imports for the first time in 16 years (Smith, 2013). The surge in US domestic oil production reflected the expansion of production beyond conventional oil reservoirs to include low-permeability formations, such as West Texas's Permian Basin. In addition, huge expansions became available in areas that had been only lightly tapped in the past, such as North Dakota's Bakken shale region. The Bakken went from producing just 125,000 barrels of oil a day five years earlier to nearly 750,000 barrels a day in 2013.6 There have been efforts to demonstrate that every country is undergoing peak oil behavior. Figure 5.24 shows such a plot for Norway. A society called the Association for the Study of Peak Oil (ASPO) lists world oil peaks at various times, matching Hubbert's prediction. Figure 5.25 shows regional production history (until 2007) and prediction (2007+), with all regions showing a Hubbert peak.

Figure 5.24 Comparison of Hubbert curve with Norwegian oil production (from Speight and Islam, 2016).

Figure 5.25 The Association for the Study of Peak Oil (ASPO) produced evidence of a Hubbert peak in all regions. ASPO makes the following argument based on Figure 5.25 (ASPO):

This graph is worth careful study, as a lot of world history is written into it. Note the steep rise in oil production after World War II. Note that 1971 was the peak in oil production in the United States lower 48. There is a sliver of white labeled Arctic oil. That is mostly Alaskan Prudhoe Bay oil, which peaked in 1990. Prudhoe Bay was almost big enough to counteract the lower-48 peak of 1971. The sliver is very narrow now. The OPEC oil embargo of 1973 is very visible. The oil produced by non-OPEC countries stayed nearly constant while OPEC production nearly halved. The embargo caused the world economy to slow. But the high cost of energy spurred the development of energy-efficient automobiles and refrigerators and a lot of other things. Note the effect of the collapse of the Russian economy in 1990 on Russian oil production. Note the rapid increase in oil production when the world economy boomed near the end of the twentieth century. Oil was $12 a barrel at that time. Note that European (North Sea) oil peaked in 2000. Note especially what would have happened if the 1973 embargo had not occurred. It is possible that the world would now be on the steep part of the right side of the Hubbert curve.

ASPO predicted that this drastic decline would begin in 2010. They wrote:

Beginning in 2010, the Middle East can no longer compensate for declining production elsewhere. Production and therefore consumption will decrease and demand will have little effect on production. The decrease results from oil depletion and the realities of geology. Large increases in the price of oil will not greatly increase production but would reduce demand. Countries with no alternatives to oil will be forced to bid up the price of oil.

As stated earlier, unconventional oil production wreaked havoc on the Hubbert curve. Figure 5.26 shows that world oil production did not reach a peak in 2008 as predicted.

Figure 5.26 Actual global oil production (surface mined tar sand not included).

5.4.4 Historical Background: Foundations of Peak Oil Theory

A critical review of existing theories (scientific as well as social) demonstrates that the most important shortcomings of these theories are to be found in their fundamental premises. To begin with, the core of the peak oil theory relies on elements from the unstated assumptions of a number of earlier theoretical expressions dealing with the most fundamental and most commonly found features of conventional oil drilling. These include mathematically formulated theories that assume certain fundamental and more or less unchanging features common to just about any enterprise undertaken in the context of generating an average profit from the most rational exploitation of some resource. A second unstated assumption is that the resource is privately owned. A third unstated assumption is that the bed in which it has been found is itself also privately owned. Tied to the second and third assumptions is the assumption that the findings of any theory incorporating these two particular assumptions will be more or less universally, or at least generally, applicable, and hence of broader theoretical value. How can that be? In the countries of the Middle East that continue to dominate global oil production, as distinct from the United States, is it possible or even likely, let alone probable, that the land of the well-site is privately owned and that the resources to be captured in those lands can belong privately to a third entity (person or corporation)? The so-called "law of capture" in the United States makes it possible for the oil to be owned separately by some entity or individual different from the owner of the land on which the well is drilled. Such a distinction does not exist in Saudi Arabia, the Islamic Republic of Iran, the Kingdom of Qatar, Algeria, Libya, or indeed any other country on earth.
Probably the single most egregious unstated assumption, however, is that every oil producer is responding to its own view of market conditions within a framework of more-or-less free competition. Such a thing would entail an absence of collusion among groups of producers to game the market, sometimes in favor of this cartel member, at other times in favor of that one. "Collusion", however, is the middle name of everyone participating in this game, and — in the conditions of monopolistic competition that characterize relationships among members of a cartel — what simply can no longer operate is the so-called "invisible hand" of Adam Smith, according to which profits become distributed in proportion to each party's invested capital over some finite period of time. Production itself, meanwhile, is assumed to be constrained on two principal fronts. It is assumed to be constrained first of all by the profit-generating potential of prevailing market incentives. It is also assumed to be constrained by the expected lifetime of the surrounding resource basin, and especially by the median depth to be drilled for any individual well to reach the 'pay zone'.7 Is it, however, a theory, i.e., something of wider or more general application than the result of conventional statistical analysis of a particular dataset? "Hubbert's peak" is valid or potentially observable as a meaningful and correct generalization if and only if at least some of the aforementioned unstated assumptions happen to be operative. Considered from this standpoint, it seems misleading to dignify such an observation with the rather high-sounding label of "theory." As for the theoretical foundations of the peak oil theory, some prefatory remarks are needed.

One reaction comes along the lines of: what could theories about the development of the broader social order have to do with a summary of patterns uncovered in oil-well drilling data from the petroleum sector of the United States economy, collected and summarized in the middle third of the 20th century? This is a very good question. The starting point of an answer is to point out that the economic order and how it works in any of its parts — such as Big Oil — links and to a large extent organizes many seemingly diverse and unrelated parts of the social order. Here the relevant question, i.e., the question that compels examining previous and existing social theory in order to approach an answer, is: does the production output data examined and summarized in "Hubbert's peak" depend fundamentally on the capacities of nature, or does it depend rather on the cupidity of certain man-made, human-guided economic organizations? To even hint at the notion of, much less assign any responsibility to, "the cupidity of certain man-made, human-guided economic organizations" is instantly rejected in knee-jerk manner as the intellectual equivalent of defecating in church and a complete outrage: the one case that must be resolutely excluded from any consideration. Since class-interested "outrage" hardly constitutes a serious intellectual argument, however, the next response is to summon some part of social theory that supports the writer/researcher's personal bias: cue "social theory". The specific trend in social theory that is summoned to backstop the peak-oil argument is the one that says, or assumes, that humans and their behaviour operate at all times and places partly at the mercy of individuals' instincts and partly in relation to certain alleged limits of the natural order. Some selection of social theories based on or derived from the writings of the Rev.
Thomas Malthus regarding food and population are then invoked, with Malthus famously arguing that it is part of man’s condition that humans reproduce in numbers and at a rate that exceed the possibilities of food production, leading to a cycle of famine followed by population decreases (Malthus, 1798). The term “social science” first appeared in the 1824 book An Inquiry into the Principles of the Distribution of Wealth Most Conducive to Human Happiness; applied to the Newly Proposed System of Voluntary Equality of Wealth by William Thompson (1775–1833). Economic theory as such meanwhile played little or no role in the development of what the world of the 19th century considered to be “social science”. Only with the rise in Germany, France, Britain and the United States of a consciously-intended research effort to “expose” Karl Marx as a lying, ill-intended fraud would the study of and research into economics be broadened to fit existing social theory and especially theories of how societies develop.8 Some of this theorizing about society as some species of organic development (rather than some random accident) began a generation before Marx, in the works of Auguste Comte (1797–1857). Comte set the cat amongst the pigeons with his notion that ideas pass through three rising stages: theological, philosophical and scientific. He defined the difference as the first being rooted in assumption, the second in critical thinking, and the third in positive observation. Comte’s aim was to establish a framework in which to account scientifically for such phenomena as the republican upsurge that created the revolution in France in 1789. In the meantime, somewhat unexpectedly, this framework — although still rejected by many — also seemed to fit nicely with an entirely unrelated strand of thought. This was the strand that would push economic study away from the path of a descriptive discipline and onto the path of a mathematically-based discipline. Neither Comte nor his German near-contemporary, Georg Wilhelm Friedrich Hegel, exercised any appreciable influence on the thinking of the other. Nevertheless, Karl Marx would deeply excavate Hegel’s radical reconceptualization of the dialectic of the ancient Greek philosophical school of Zeno to eventually conclude that the proper study of history — alongside the accepted traditional methods of documentary analysis — would and should be supplemented by some of the methods developed in the physical sciences to better specify the context of an historical question. Marx and his decades-long political and intellectual companion Frederick Engels jointly elaborated the basic principles of such a methodical approach under the name “dialectical and historical materialism.”9

Figure 5.27 Petroleum is the driver of the world economy and is itself driven by political events (data from EIA, 2017).

5.4.5 Deconstruction of Peak Oil Theory

This theory advances the concept that after fossil fuel reserves (oil reserves, coal reserves, and natural gas reserves) are discovered, production at first increases approximately exponentially, as more extraction commences and more efficient facilities are installed. This is the first assumption. It is based on the following premises: i) fossil fuel production technology is uniform and constant, with the same impact on the resource irrespective of the resource type and the nature of the technology; and ii) demand for and need of energy are predetermined and independent of any intangible factors, including political, social and environmental constraints. Assumption i) is demonstrably false (Islam, 2014), and will not be further discussed since it is not the subject of this chapter. With the correct assumption in its place, the fundamental tenet of the peak oil theory breaks down. Assumption ii) needs some elaboration before deconstruction, as it concerns the central theme of the book: sustainable economics.

5.4.5.1 Background of the Petroleum Industry

The oil industry is past middle age in a business life-cycle sense and is at a crossroads. Darcy’s law, proposed over 150 years ago, still remains the only viable model used in all flow prediction schemes. As faster and bigger computers are developed, bigger and more robust reservoir simulators with all the features of video games are deployed – without adding a single component that would describe the physics of the process any better than the model used a century ago. Two decades ago, using a million blocks was the most sought-after feature of a reservoir simulator. Now, oil companies brag about using billion-block simulators, forgetting that this probably means an increasingly poor reflection of the physics of the system. If science and engineering are this pitiful, how is it in management and energy pricing? Accounting principles from the 1800s continue to teach that the lowest-cost producer will make the highest profit and therefore will be the most successful. With this lesson in mind, the oil companies are preoccupied with cost cutting, curtailing research and development, gaining economies of scale, and consolidating through mergers and buy-outs. Cadres of business consultants are hired to install governance and total quality management programs. As these programs permeate the industry, returns sink to the same low level. In other words, there are numerous competing companies selling plentiful volumes of oil and gas while constantly bidding against each other for operating rights and product sales. No amount of cost cutting and further reduction in research can stop this downward spiral.
It is by convention that economic theories are announced or explained as final, finished products. In reality, however, they are anything but: they express the ideological and political priorities of the ruling forces of the establishment in the short term, at various times and in response to various pressures. Thus, the long-standing defense of conventional establishment economic theory takes the form of an argument to the effect that, so long as all economic players act according to their self-interest in the marketplace, either as buyers or sellers of commodities, they will each maximize their own satisfaction. This version of conventional theory replaced an earlier version that had declared that the marketplace was guided by an “invisible hand.” This supposedly maximized the satisfactions of buyers and sellers, so long as neither buyers nor sellers combined to restrain the freedom of the other in the marketplace and so long as the government resisted all opportunities to interfere in the operations of the marketplace. If all these conditions were met, all markets would clear at equilibrium prices, as shown in Figure 5.28, and there would be no danger of overproduction or underconsumption. Subsequently, in the Great Depression of the 1930s, the emergence of vast concentrations of ownership and production disastrously confirmed the validity of all the earlier warnings against sellers of finished goods combining in the marketplace. It also demonstrated conclusively that, once such monopolies emerged, overproduction had become endemic to both the short term and the long term of the economy. This, in turn, greatly strengthened arguments in favor of reorganizing production for the long term on a very different basis. The new basis proposed eliminating the capture of surpluses and profits as the main and sole driver of economic development and investment, either in the short term or the long term.

Figure 5.28 Production-Cost and Market-Price Realities “At The Margin” (from Zatzman and Islam, 2007).

Almost mesmerizing in its simplicity, conventional theory treats the production system for any commodity as given. The graph depicts the resulting situation provided that there are no interdependencies, all competing suppliers are in the market on an equal basis, and current demand for any good is met entirely and only by its current supply. Once a market is filled, whether by a quasi-monopoly supplier, a cartel, or all competitive suppliers, conventional economic theory asserts that it also “clears”: all goods that can be sold have been exchanged for money, and the production-consumption cycle is then renewed. Reality demonstrates otherwise. Once actual total production has taken place, some proportion, which may increase over time, becomes stockpiled. As time passes, this surplus can run well in advance of current demand. Market demand, meanwhile, advances at rates far below this rate of increase in total production. In such a scenario, suppliers’ costs are transformed from input costs framed by the exigencies of actual external competition into “transfer prices” between different levels of an increasingly vertically integrated structure of production and marketing. Consumers’ costs then become predetermined in accordance with arrangements between owners of the forces of material production and owners or operators of wholesale and/or retail distribution networks.
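The market-clearing notion depicted in Figure 5.28 can be made concrete with a minimal linear supply-demand sketch. All parameter values below are hypothetical, chosen only for illustration; they are not taken from the figure:

```python
def clearing_price(a_d, b_d, a_s, b_s):
    """Equilibrium of linear demand Qd = a_d - b_d*p and supply Qs = a_s + b_s*p.

    Returns the price p* at which the market 'clears' (Qd == Qs) and the
    quantity exchanged at that price.
    """
    p_star = (a_d - a_s) / (b_d + b_s)  # setting Qd = Qs and solving for p
    q_star = a_d - b_d * p_star
    return p_star, q_star

# Hypothetical schedules: demand Qd = 100 - 2p, supply Qs = 10 + p
p, q = clearing_price(100.0, 2.0, 10.0, 1.0)
# At p = 30.0 both schedules yield q = 40.0, so the market clears
```

The chapter's point, of course, is precisely that stockpiling, vertical integration and transfer pricing violate the assumptions behind this tidy picture.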

5.4.5.2 Equity Shoulders Debt

This section challenges the assumption that equity shoulders debt, which is significant for certain assumptions that peak oil theory makes. Another old accounting principle concludes that equity shoulders debt, and therefore that growth is limited by the ability of a company to raise capital. Such a model has dictated the accumulation of public debt in the United States (Figure 5.29). However, such a model is no longer valid in the information age. Cisco Systems gained a market capitalization of about half a trillion dollars in 16 years, increased sales 50-fold, carried essentially no debt, and sold no stock during the decade of 1990–2000. When an order is placed, it goes directly to the supplier, who builds the unit and ships it directly to the buyer. Cisco invoices the customer, keeps the markup and pays the supplier. A few additional examples of decapitalized companies are Dell, Qualcomm, and Microsoft. The bottom line is that capital, like land, is losing its place as a governing metric for wealth creation.

Figure 5.29 US public debt as percentage of GDP.

Change is inevitable in this situation. The trend seems to be toward large oil and gas companies acting more like investment bankers. They have strong ties to host governments, the World Bank and other international credit and lending banks, to other oil companies, and to oil service and construction companies. In this mode, the large oil companies bring together the various parties that are needed to develop new oil and gas fields, build pipelines, organize tanker transport, construct and operate refineries and chemical plants, and distribute and market the products to consumers. Either before or soon after finding new reserves or establishing production from new fields, significant ownership portions are sold to competitors to recover initial investments and to limit risk. Retained partial ownership in the newly developed fields provides continuing cash flow to fund new investments. Partnering with competitors disseminates technology and limits risk. It also limits the opportunity to dominate industry segments via innovative technology application and therefore reduces the desire to fund research and development. Oil companies becoming investment bankers have some similarities with the Cisco model. Although new oil and gas field development is highly capital-intensive, the oil companies as investment bankers or virtual oil companies can spread the capital burden widely. As the model expands to allow suppliers, service and construction companies to partner in new fields, further risk and capital spreading is possible. In fact, one scenario can be imagined whereby substantially less direct capital is required. Here, all development needs such as exploration, drilling, production and treating facilities, and transport are “capitalized” by produced-volume sharing. Business-to-business (B2B) methods can be included in this model to further reduce capital and operating costs, and to generate revenue. However, the sad reality in all this is that the Cisco ‘model’ cannot simply be imported into the petroleum industry or any other application; the industry must conduct independent research to develop its own solution that befits the information age. Cutting research money, or disguising consulting expenses as research in order to plunder public funds in the name of tax benefits, will not solve the problem. What oil companies are engaged in is similar to the behaviour of a middle-aged man: he still craves the glamour of his past but cannot think of a way to maintain that level of energy. Rather than being creative about it and finding a new solution, he engages in cosmetic changes, coloring his hair, even resorting to plastic surgery. All this makes him look more like a cadaver than a young, energetic man. If someone points this out to him, he screams like a grumpy old man. I have observed many CEOs of oil companies – all behave this way when it comes to making company decisions. Of course, there are some exceptions, but they are too scared to make a row with the establishment. Few, if any, have the courage to do things for the sake of the overall good. Everyone is trying to look good today, lest there be no tomorrow for them.

5.4.6 The Finite/Infinite Conundrum

The next assumption of peak oil theory is that the oil reserve is finite. The theory first assumes the ultimate recoverable reserve, then expresses cumulative oil production as a function of the ultimate recoverable reserve. Cavallo (2004) defines the Hubbert curve used to predict the U.S. peak as the derivative of:

Q(t) = Qmax / (1 + a e^(-bt))  [Eq. 5.1]

where Q(t) is the cumulative oil production, Qmax is the maximum producible reserve, and a and b are constants. The year of maximum annual production (the peak) is then back-calculated as:

t_peak = (1/b) ln(a)  [Eq. 5.2]

The fixation of Qmax is at the core of the Hubbert curve. Theoretically, the recoverable reserve increases for two reasons: 1) the boundary of the resource, and 2) the technology. As discussed in earlier sections, the boundary of the resource is continuously moving. The recent surge in unconventional oil and gas reserves makes an excellent case in this regard. In fact, the following section makes the argument that this boundary is fictitious and that, for a sustainable recovery scheme, this boundary should not exist. The second reason for the reserve to grow is technology that becomes applicable to a broader resource base. The earlier section on EOR makes the argument that EOR schemes alone can continue to increase the reserve, and have done so in the past.
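Under the (contested) assumption of a fixed Qmax, Eqs. 5.1 and 5.2 are straightforward to compute. A minimal sketch follows; the parameter values are purely illustrative and are not fitted to any real dataset:

```python
import math

def hubbert_cumulative(t, q_max, a, b):
    """Cumulative production Q(t) = Qmax / (1 + a*exp(-b*t))  (Eq. 5.1)."""
    return q_max / (1.0 + a * math.exp(-b * t))

def hubbert_rate(t, q_max, a, b):
    """Annual production dQ/dt, the bell-shaped 'Hubbert curve' itself."""
    e = a * math.exp(-b * t)
    return q_max * b * e / (1.0 + e) ** 2

def peak_time(a, b):
    """Back-calculated time of maximum annual production, t = ln(a)/b  (Eq. 5.2)."""
    return math.log(a) / b

# Illustrative values only: q_max in Gbbl, b in 1/yr, t in years from a datum
q_max, a, b = 200.0, 50.0, 0.07
t_peak = peak_time(a, b)
# At the peak, exactly half of q_max has been produced: Q(t_peak) = q_max / 2
```

Note that the peak falls where cumulative production reaches Qmax/2 — which is why fixing Qmax in advance, the very step the text disputes, is what makes the "peak" computable at all.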

There is a general misconception that Hubbert was concerned with “easy” oil, “easy” metals, and so forth that could be recovered without greatly advanced mining efforts, and with how to time the necessary resource-acquisition advancements or substitutions by knowing an “easy” resource’s probable peak. The difficulty with the Hubbert curve is not its assumption that easy oil recovery is constant; it is rather the notion that a resource that turns into reserve with time is finite. As shown in previous sections, accessing greater resource bases is not a matter of ‘more difficult’ technology; it is rather a matter of producing with sustainable techniques.

5.4.6.1 Renewable vs Non-Renewable: No Boundary-As-Such

Chhetri and Islam (2008) elaborated on the notion that a ‘finite resource’ is not a scientific concept. With sustainable recovery tools, resources are infinite and are part of a continuous cycle. Figure 5.30 shows that as the natural processing time increases, the energy content of natural fuels increases from wood to natural gas. The average energy value of wood is 18 MJ/kg (Hall and Overend, 1987), and the energy contents of coal, oil and natural gas are 39.3 MJ/kg, 53.6 MJ/kg and 51.6 MJ/kg, respectively (Website 4). Moreover, this shows that renewable and non-renewable energy sources have no boundary. It is true that solar, geothermal, hydro and wind sources are being renewed at every second based on the global natural cycle. The fossil fuel sources are solar energy stored by trees in the form of carbon which, under temperature and pressure, emerges as coal, oil or natural gas after millions of years. Biomass is renewed over periods ranging from a few days to a few hundred years (as a tree can live up to several hundred years). These processes continue forever. There is not a single point at which fossil fuel has started or stopped its formation. So why are these fuels called non-renewable?
The current technology development mode is based on a short-term approach, as our solutions to problems start with the basic assumption that Δt tends to 0. Only technologies that fulfill the criterion of time approaching infinity are sustainable (Khan and Islam, 2007). The only problem with fossil fuel technology is that the fuels are rendered toxic after they are refined using high heat, toxic chemicals and catalysts.

Figure 5.30 Energy content of different fuels (MJ/kg), from Speight and Islam, 2016.

From the above discussion, it is clear that fossil fuels can contribute a significant amount of energy by 2050. It is widely believed that fossil fuels will be used up soon. However, there are still huge reserves of fossil fuel. The current estimate of the total reserves is based on exploration to date. If one assumes a priori that reserves are declining with time (Figure 5.31a), one fails to see the role of exploration and drilling activities. As the number of drilling or exploration activities increases, more recoverable reserves can be found (Figure 5.31c). In fact, Figure 5.31 is equally valid if the abscissa is replaced by ‘time’ and the ordinate is replaced by ‘exploratory drillings’ (Figure 5.31b). For every energy source, more exploration will lead to a larger fuel reserve. This relationship makes the reserve of any fuel type truly infinite, and it alone can be used as a basis for developing technologies that exploit local energy sources.

Figure 5.31 Fossil fuel reserves and exploration activities.

The US oil and natural gas reserves reported by the EIA consistently show that the reserves have increased over the years (Table 5.3 gives a sampler). These additional reserves were estimated after the analysis of geological and engineering data. Hence, based on currently observed patterns, as the number of explorations increases, the reserves will also increase.

Table 5.3 US crude oil and natural gas reserves.

Crude Oil (million barrels):
Year  Reserve  % Increment
1998  21,034   –
1999  21,765   3.5%
2000  22,045   1.3%
2001  22,446   1.8%

Natural Gas:
Year  Reserve  % Increment
1998  164,041  –
1999  167,406  2.1%
2000  177,427  6.0%
2001  183,460  3.4%
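The year-over-year growth figures in Table 5.3 follow directly from the reserve estimates quoted there. A short sketch reproducing them (the natural gas unit is assumed to be billion cubic feet, the EIA's customary unit, since the source gives "million barrels" for both series):

```python
def percent_increments(reserves):
    """Year-over-year percentage growth of a reserve series, one decimal place."""
    return [round(100.0 * (curr - prev) / prev, 1)
            for prev, curr in zip(reserves, reserves[1:])]

# EIA reserve estimates quoted in Table 5.3, 1998-2001
crude_oil = [21034, 21765, 22045, 22446]        # million barrels
natural_gas = [164041, 167406, 177427, 183460]  # billion cubic feet (assumed unit)

print(percent_increments(crude_oil))    # [3.5, 1.3, 1.8]
print(percent_increments(natural_gas))  # [2.1, 6.0, 3.4]
```

The computed increments match the table, which also confirms the 1999 crude oil figure as 21,765 million barrels.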

Figure 5.32 shows that the discovery of natural gas reserves increases as exploration activities or drillings are increased. Biogas is naturally formed in swamps, paddy fields and other places due to the natural degradation of organic materials. As shown in previous sections, there are huge gas reservoirs, including deep gas, tight gas, Devonian shale gas and gas hydrates, which are not yet exploited. The current exploration level is limited to shallow gas, which is a small fraction of the total natural gas reserve. Hence, by increasing the number of exploration activities, more and more reserves can be found, which indicates the availability of an unlimited amount of fossil fuels. As natural processes continue, the formation of natural gas also continues forever. This is applicable to other fossil fuel resources such as coal, light and heavy oil, bitumen and tar sands.

Figure 5.32 Discovery of natural gas reserves with exploration activities (from Islam, 2014).

Figure 5.30 shows the variation of the resource base with time, starting from biomass to natural gas. Biomass is available on earth in huge quantities. Due to natural activities, the biomass undergoes various changes. With heat and pressure in the interior of the earth, the formation of fossil fuels starts with the degradation of organic matter through microbial activity. The slope of the graph indicates that the volume of the reserve decreases as it is further processed. Hence, there is more coal than oil and more oil than natural gas, which in effect means unlimited resources. Moreover, the energy content per unit mass of the fuel increases as the natural processing time increases. Since the biomass resource is renewable and biological activities continue on the earth, the process of formation of fossil fuels also continues forever. From this discussion, the conventional understanding that there is a firm boundary between renewable and non-renewable energy resources is dismantled, and it is concluded that there is no boundary between the renewable and the non-renewable in the long run, as all natural processes are renewable. The only problem with fossil fuels arises from the use of toxic chemicals and catalysts during oil refining and gas processing. Provided that fossil fuels are processed using natural and non-toxic catalysts and chemicals, or that crude oil or gas is used directly, fossil fuel will remain a good supplement in the global energy scenario in the days to come. These resources are totally recyclable.

5.5 Petroleum in the Big Picture

“Colonel” Edwin Drake is usually credited with drilling the first-ever oil well at Titusville, PA, in the United States in 1859. However, even if one discards the notion that petroleum had been in use for thousands of years, there is in actual fact credible evidence that the first well of the modern age was drilled in the present-day Canadian province of Ontario (abbreviated today as ON), specifically near Sarnia, in what was then the British colony of Canada West. Canadian engineer Charles Nelson Tripp was the first in North America to recover commercial petroleum products. The drilling was completed in 1851 at Enniskillen Township, near Sarnia, ON. Soon after the “mysterious gum” bed was discovered, the first oil company was incorporated in British North America through a charter issued under the authority of the British Parliament at Westminster. Tripp became the president of this company on December 18, 1854. The charter empowered the company to explore for asphalt beds and oil and salt springs, and to manufacture oils, naphtha, paints and burning fluids. Even though this company (International Mining and Manufacturing) was not a financial success, its petroleum products received an honorable mention for excellence at the Paris Universal Exhibition in 1855. The failure of the company can be attributed to several factors. The lack of roads in the area rendered the movement of machinery and equipment to the site extremely difficult. After every heavy rain the area turned into a swamp, and the gum beds made drainage extremely slow. This added to the difficulty of distributing finished products. In subsequent years, James Miller Williams became interested and visited the site in 1856. Tripp unloaded his hopes, his dreams and the properties of his company, saving for himself a spot on the payroll as landman. The former carriage-builder formed J.M. Williams and Company in 1857 to develop the Tripp properties. Besides asphalt, he began producing kerosene. This ‘refined’ product, kerosene, is a combustible hydrocarbon liquid whose name is derived from the Greek κηρός (keros), meaning wax. The word “Kerosene” was registered as a trademark by Abraham Gesner in 1854, and for several years only the North American Gas Light Company and the Downer Company (to which Gesner had granted the right) were allowed to call their lamp oil “Kerosene” in the United States. In 1846, Gesner had developed a process to refine a liquid fuel from coal, bitumen and oil shale.
His new discovery burned more cleanly and was less expensive than competing products, such as whale oil. In 1850, Gesner created the Kerosene Gaslight Company and began installing lighting in the streets of Halifax and other cities. By 1854, he had expanded to the United States, where he created the North American Kerosene Gas Light Company at Long Island, New York. Demand grew to the point where his company’s capacity to produce became a problem, but the discovery of petroleum, from which kerosene could be more easily produced, solved the supply problem. This was the first time in recorded history that an artificial processing technique was introduced in refining petroleum products. Gesner did not use the term ‘refined’ but made a fortune out of the sale of this artificial processing. In 1861, he published a book titled “A Practical Treatise on Coal, Petroleum and Other Distilled Oils,” which became a standard reference in the field. As Gesner’s company was absorbed into the petroleum monopoly, Standard Oil, he returned to Halifax, where he was appointed a Professor of Natural History at Dalhousie University. It is this university that was founded on pirated money while other pirates continued to be hanged by the Royal Navy at Point Pleasant Park’s Black Rock Beach as late as 1844.10 In the meantime, there was further parallel development in the present-day Canadian province of Ontario. In 1858, Williams dug a well in search of cleaner drinking water and came across oil at a depth of 15.5 meters. It became the first commercial oil well in North America, remembered as the Williams No. 1 well at Oil Springs, Canada West. The Sarnia Observer and Lambton Advertiser, quoting from the Woodstock Sentinel, published on page two on August 5, 1858: “An important discovery has just been made in the Township of Enniskillen. A short time since, a party, in digging a well at the edge of the bed of Bitumen, struck upon a vein of oil, which combining with the earth forms the Bitumen.” Some historians challenge Canada’s claim to North America’s first oil field, arguing that Pennsylvania’s famous Drake Well was the continent’s first. But there is evidence to support Williams, not least of which is that the Drake well did not come into actual production until 28 August 1859. The controversial point might be that Williams found oil above bedrock while “Colonel” Edwin Drake’s well had located oil within a bedrock reservoir. History is not clear as to when Williams abandoned his Oil Springs refinery and transferred his operations to Hamilton. He was certainly operating there by 1860, however. Spectator advertisements offered coal oil for sale at 16 cents per gallon for quantities from 4,000 US gallons (15,000 L) to 100,000 US gallons (380,000 L). By 1859, Williams owned 800 acres of land in Oil Springs. Williams reincorporated in 1860 as the Canadian Oil Company. His company produced oil, refined it and marketed the refined products. That mix of operations qualified Canadian Oil at the time as the world’s first integrated oil company. Exploration in the Lambton County backwoods quickened with the first flowing well in 1860; previous wells had relied on hand pumps. The first gusher erupted on January 16, 1862, when a well struck oil at 158 feet (48 m). For a week, the oil gushed unchecked at levels reported as high as 3,000 barrels per day, eventually coating the distant waters of Lake St. Clair with a black film. There is historical controversy concerning whether it was John Shaw or another oil driller named Hugh Nixon Shaw who drilled this gusher; the newspaper article cited below identifies John Shaw.
News of the gusher spread quickly and was reported in the Hamilton Times four days later: “I have just time to mention that to-day at half past eleven o’clock, a.m., Mr. John Shaw, from Kingston, C. W., tapped a vein of oil in his well, at a depth of one hundred and fifty-eight feet in the rock, which filled the surface well, (forty-five feet to the rock) and the conductors [sic] in the course of fifteen minutes, and immediately commenced flowing. It will hardly be credited, but nevertheless such is the case, that the present enormous flow of oil cannot be estimated at less than two thousand barrels per day, (twenty-four hours), of pure oil, and the quantity increasing every hour. I saw three men in the course of one hour, fill fifty barrels from the flow of oil, which is running away in every direction; the flat presenting the appearance of a sea of oil. The excitement is intense, and hundreds are rushing from every quarter to see this extraordinary well.” Historically, the ability of oil to flow freely has fascinated developers, while the ability of gas to leak and go out of control has intimidated them. Such fascination and intimidation continue today, with nuclear electricity considered benign while natural gas is considered the source of global warming, all because it contains carbon – the very component nature needs for creating an organic product. Scientifically, however, the need for refining stems from the necessity of producing a clean flame. Historically, Arabs were reportedly the first to use refined olive oil. They used exclusively natural chemicals in order to refine oil (Islam et al., 2010). However, such use of natural chemicals is nonexistent in the modern-day petroleum industry. Petroleum gas, for its part, had been in use for millennia, but only in recent times has ‘processing’ of such gas been introduced. Natural gas seepages in Ontario County, New York State, were first reported in 1669 by the French explorer M. de La Salle and a French missionary, M. de Galinée, who were shown the springs by local Native Americans. This marks the debut of the natural gas industry in North America. Subsequently, William Hart, a local gunsmith, drilled the first commercial natural gas well in the United States in 1821 in Fredonia, Chautauqua County, NY. He drilled a 27-foot-deep well in an effort to get a larger flow of gas from a surface seepage of natural gas. This was the first well intentionally drilled to obtain natural gas. Hart built a simple gas meter and piped the natural gas to an innkeeper on the stagecoach route from Buffalo to Cleveland. Because there was no pipeline network in place, this gas was almost invariably used to light streets at night. However, in the late 1800s, electric lamps were beginning to be used for lighting streets. This led gas producers to scramble for alternate markets. Shallow natural gas wells were soon drilled throughout the Chautauqua County shale belt. This natural gas was transported to businesses and street lights in Fredonia at the cost of $1.50 a year for each light (Website 1). In the meantime, in the mid-1800s, Robert Bunsen invented the “Bunsen burner,” which helped produce an artificial flame by controlling the air inflow to an open flame. This was significant because it helped produce intense heat and control the flame at the same time.
This paved the way for the use of natural gas for both domestic and commercial purposes. The original Hart well produced until 1858 and supplied enough natural gas for a grist mill and for lighting in four shops. By the 1880s, natural gas was being piped to towns for lighting and heat, and to supply energy for the drilling of oil wells. Natural gas production from sandstone reservoirs in the Medina Formation was discovered in 1883 in Erie County. Medina production was discovered in Chautauqua County in 1886. By the early years of the twentieth century, Medina production was established in Cattaraugus, Genesee and Ontario Counties. Gas in commercial quantities was first produced from the Trenton limestone in Oswego County in 1889 and in Onondaga County in 1896. By the close of the nineteenth century, natural gas companies were developing longer intrastate pipelines and municipal natural gas distribution systems. The first gas storage facility in the United States was developed in 1916 in the depleted Zoar gas field south of Buffalo. By the late 1920s, declining production in New York’s shallow gas wells prompted gas companies to drill for deeper gas reservoirs in Allegany, Schuyler, and Steuben Counties. The first commercial gas production from the Oriskany sandstone was established in 1930 in Schuyler County. By the 1940s, deeper gas discoveries could no longer keep pace with the decline in shallow gas supplies. Rapid depletion and over-drilling of deep gas pools prompted gas companies in western New York to sign long-term contracts to import gas from out of state.

It took the construction of pipelines to bring natural gas to new markets. Although one of the first lengthy pipelines was built in 1891 – it was 120 miles long and carried gas from fields in central Indiana to Chicago – very few pipelines were built until after World War II in the 1940s. As with all other developments in modern Europe, World War II brought about changes that led to numerous inventions and technological breakthroughs in the area of petroleum production and processing. Improvements in metals, welding techniques, and pipe making during the war made pipeline construction more economically attractive. After World War II, the nation began building its pipeline network. Throughout the 1950s and 1960s, thousands of miles of pipeline were constructed throughout the United States. Today, the U.S. pipeline network, laid end-to-end, would stretch to the moon and back twice. The phenomenon of pipelining is significant in itself: it has driven a tremendous surge in the corrosion-control industry. Onondaga reef fields were discovered by seismic prospecting in the late 1960s. Seven reef fields have been discovered to date in southern New York. Today, the Onondaga reef fields and many Oriskany fields are largely depleted and are being converted to gas storage fields. This state of depletion resulted from a long production period and extensive hydraulic fracturing throughout the 1970s and 1980s; these were considered to be tight gas sands. Recently, the same technology has made a comeback. The rapid development of New York's current Trenton-Black River gas play is made possible by technological advances in three-dimensional (3-D) seismic imaging, horizontal drilling, and well completion. The surge in domestic oil and gas production through 'fracking' emerges from technologies popularized in the 1970s; however, 3-D seismic and multilateral drilling technologies were not in place at the time.
Figure 5.33 shows how natural gas production evolved in the state of New York throughout history.

Figure 5.33 Natural gas production history in New York state (from Islam, 2014). In this figure, the first spike relates to the discovery of Devonian shale. That spike led to quick depletion. In the early 1970s, production from 'tight gas' formations led to another, more sustained spike in gas recovery. During that period, extensive hydraulic fracturing was introduced as a means of increasing productivity; however, it was not considered to be a reservoir production enhancement scheme. In 2000, at the nadir of the global oil price, yet another spike took place in the state of New York. This one related to the development of the Trenton-Black River field, a gas production scheme that would lead to record gas production in that state in 2005. This spike continued and led the way to producing domestic gas and oil from unconventional reservoirs in the USA. Today, production from unconventional gas reservoirs has taken an unprecedented turn. In 2013, production from shale gas, tight gas, and coal bed methane pushed domestic production past imports for the first time in 30 years. Shale gas, tight oil, and other unconventional resources are found in many of the states that had already produced from conventional sources. Figure 5.34 shows the locations of these unconventional formations.

Figure 5.34 Locations of unconventional shale plays in the lower 48 states (from Ratner and Tiemann, 2014). The primary recovery techniques in these shale plays involve multilaterals and intense hydraulic fracturing, now known as 'fracking'. Two significant differences between fracking and old-fashioned hydraulic fracturing are: 1) fracking uses multistage fractures with horizontal multilaterals; 2) fracking uses artificial sands as well as artificial fluids. In 1997, based on earlier techniques used by Union Pacific Resources (now part of Anadarko Petroleum Corporation), Mitchell Energy (now part of Devon Energy) developed the hydraulic fracturing technique known as "slickwater fracturing," which involves adding chemicals to water to increase the fluid flow; this is what made shale gas extraction economical. These chemicals are both expensive and toxic to the environment.

5.5.1 Unconventional Oil and Gas Resources

The notion that conventional gas and oil reserves are minuscule compared to unconventional reserves is decades old. With it comes the notion, prevalent at least for petroleum oil, that unconventional resources are more challenging to produce. This notion is false. With the renewed awareness of environmental sustainability, it is becoming clear that unconventional resources offer more opportunities to produce environment-friendly products than conventional resources do. Figure 5.35 shows the resource pyramid for both oil and gas.

Figure 5.35 Moving from conventional to unconventional sources, the volume of the petroleum resource increases. On the oil side, the quality of oil is considered to decline as the API gravity declines. This ranking reflects the processing required for crude oil to be ready for conversion into usable energy, which in turn relates to heating value. Heating value is typically increased by refining crude oil with added artificial chemicals, and these chemicals are principally responsible for global warming (Chhetri and Islam, 2008; Islam et al., 2010). In addition, the process is inefficient and the resulting products are harmful to the environment. Figure 5.36 shows the trends in efficiency, environmental benefit, and real value against the production cost of refined crude. This figure clearly shows that there is a great advantage to using petroleum products in their natural state. This is the case for unconventional oil. For instance, shale oil burns naturally. The color of the flames (left image of Picture 5.2) indicates that crude oil produced from shale oil does not need further processing. The right image of Picture 5.2 comes from burning gasoline and shows colors similar to those on the left.

Figure 5.36 Cost of production increases as the efficiency, environmental benefits, and real value of crude oil decline (modified from Islam et al., 2010).

Picture 5.2 Images of burning crude oil from shale oil (left) and refined oil (right). In addition, crude oil from shale oil is 'cleaner' than other forms of crude oil because it is relatively low in tar content as well as sand particles. Another crucial point is that the sulfur content and other toxic elements of crude oil have no correlation with unconventional or conventional sources. Likewise, heavier oils do not contain more of these toxic elements and do not need refining to be usable. Lighter crudes are considered easier and less expensive to produce only because modern engineering uses a refined version of crude oil, and all refining technologies are specially designed to handle light crude oil. If sustainable refining techniques are used, lighter or conventional oil offers no particular advantage over unconventional oil, while the volume and ease of production are greater for unconventional resources. For natural gas, the quality of gas actually improves with unconventional resources. For oil, the lighter the oil, the more toxic it is considered; for gas, by contrast, the more readily available resources are less toxic. For instance, biogas is the least toxic, and it is the most plentiful. As can be seen in Figure 5.37, as one moves from conventional gas to coal bed methane (CBM) to tight gas and shale gas, all the way to hydrates, one encounters more readily combustible natural resources. In fact, CBM burns so readily that coal mine safety primarily revolves around the combustion of methane gas. The processing of gas does not involve making it more combustible; rather, it involves the removal of components that do not add to the heating value or that create safety concerns (e.g., water, CO2, H2S).

Figure 5.37 Current estimate of conventional and unconventional gas reserves (from Islam, 2014). Figure 5.37 shows how the volume of resources increases as one moves from conventional to unconventional resources. In this process, the quality of the gas also increases. For instance, hydrates contain the purest form of methane and can be burnt directly with little or no safety concern. At the same time, the volume of natural gas in hydrates is very large. The concentration of 'sour' gas components also decreases with the abundance of the resource. Such a trend can be explained by the processing time of a particular resource. There is a continuity in nature that dictates that natural processing increases both the value and the global efficiency of energy sources (Chhetri and Islam, 2008). Figure 5.38 depicts the volume of natural resources as a function of processing time.

Figure 5.38 Abundance of natural resources as a function of time. In this figure, 'natural gas' refers to petroleum products in the conventional sense. The figure shows that natural gas in general is the most suitable for clean energy generation. Within unconventional gas sources, there exists another correlation between reserve volume and processing time. In general, the processing time of various energy sources is not a well-understood science. Scientists are still grappling with the origin of the earth and the universe, some discovering only recently that water was, and remains, the matrix component for all matter (Pearson et al., 2014). Figure 5.39 shows how natural evolution on earth involved a distinctly different departure point not previously recognized. Pearson et al. (2014) examined a 'rough diamond' found along a shallow riverbed in Brazil that unlocked evidence of a vast "wet zone" deep inside the Earth that could hold as much water as all the world's oceans put together. This discovery is important for two reasons. Water and carbon are both essential for living organisms, and they mark the beginning and end of a life cycle. All natural energy sources contain carbon or require carbon to transform energy into usable form (e.g., photosynthesis).

Figure 5.39 Water plays a more significant role in material production than previously anticipated (from Islam, 2014). The world petroleum reserve takes on a different meaning if unconventional gas is added to the equation.

5.5.2 Gas Hydrates

It is well known that gas hydrates possess tremendous potential as a source of natural gas. The resource is so vast that estimates of global reserves are only sketchy, ranging from 2,800 trillion to 8 billion trillion m3 of natural gas. This is several times higher than the global reserves of 440 trillion m3 of conventional gas. While gas hydrate burns readily (Picture 5.3), exploitation of hydrate reserves is considered a difficult task.

Picture 5.3 Hydrate burns readily without any safety or environmental hazard. Global estimates by EIA (2017) place the gas volume resident in oceanic natural gas hydrate deposits in the range of 30,000 to 49,100,000 trillion cubic feet (Tcf), and in continental natural gas hydrate deposits in the range of 5,000 to 12,000,000 Tcf. Comparatively, current worldwide natural gas resources are about 13,000 Tcf and natural gas reserves are about 5,000 Tcf. The current mean (expected value) estimate of domestic natural gas hydrates in place is 320,222 Tcf. In comparison, as of 1997 the mean estimate of all untapped technically recoverable U.S. natural gas resources was 1,301 Tcf, U.S. proved natural gas reserves were 167 Tcf, and annual U.S. natural gas consumption was about 22 Tcf. Large volumes of natural gas hydrates are known to exist in both onshore and offshore Alaska; offshore the states of Washington, Oregon, California, New Jersey, North Carolina, and South Carolina; and in the deep Gulf of Mexico. Most of the volume is expected to be in federal jurisdiction offshore waters, although 519 Tcf of hydrated gas-in-place was assessed for onshore Alaska, more than three times the 1997 level of U.S. proved natural gas reserves. The USGS assessment indicates that the North Slope of Alaska may host about 85 Tcf of undiscovered technically recoverable gas hydrate resources (Figure 5.40). According to the report, technically recoverable gas hydrate resources on the North Slope could range from a low of 25 Tcf to as much as 158 Tcf. Total U.S. consumption of natural gas in 2007 was slightly more than 23 Tcf. Of the mean estimate of 85 Tcf of technically recoverable gas hydrates on the North Slope, 56% is located on federally managed lands, 39% on lands and offshore waters managed by the state of Alaska, and the remainder on Native lands. The total area covered by the USGS assessment is 55,894 square miles, extending from the National Petroleum Reserve in the west to the Arctic National Wildlife Refuge (ANWR) in the east (Figure 5.40). The area extends north from the Brooks Range to the state-federal offshore boundary three miles north of the Alaska coastline. Gas hydrates might also be found outside the assessment area.
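To put these figures in perspective, a rough years-of-supply calculation can be made from the numbers quoted above. A minimal sketch in Python, using the 320,222 Tcf mean in-place hydrate estimate, the 85 Tcf North Slope mean, and the roughly 23 Tcf/yr 2007 U.S. consumption rate cited in the text (the constant names are ours, for illustration):

```python
# Rough years-of-supply arithmetic using the Tcf figures quoted in the text.
# These are in-place or technically recoverable estimates, not proved
# reserves, so the results indicate scale only.

US_HYDRATES_IN_PLACE_TCF = 320_222   # mean domestic gas-hydrate in-place estimate
NORTH_SLOPE_RECOVERABLE_TCF = 85     # USGS mean, technically recoverable
US_ANNUAL_CONSUMPTION_TCF = 23       # approximate 2007 U.S. consumption

def years_of_supply(resource_tcf: float, consumption_tcf_per_year: float) -> float:
    """Years the resource would last at a constant consumption rate."""
    return resource_tcf / consumption_tcf_per_year

print(f"In-place hydrates:  {years_of_supply(US_HYDRATES_IN_PLACE_TCF, US_ANNUAL_CONSUMPTION_TCF):,.0f} years")
print(f"North Slope (mean): {years_of_supply(NORTH_SLOPE_RECOVERABLE_TCF, US_ANNUAL_CONSUMPTION_TCF):.1f} years")
```

At the quoted mean in-place estimate, domestic hydrates correspond to roughly 14,000 years of consumption at the 2007 rate. Since only a fraction of in-place gas is ever recoverable, the comparison indicates scale rather than actual supply.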

Figure 5.40 Gas hydrate deposits of Alaska (From Islam, 2014).

Figure 5.41 Known and inferred natural gas hydrate occurrences in marine (red circles) and permafrost (black diamonds) environments (from Islam, 2014). Global estimates by the committee for gas estimates reported methane in gas-hydrate deposits to be in the range of 3.1 × 10^15 to 7,600 × 10^15 m3 for oceanic sediments and from 0.014 × 10^15 to 34 × 10^15 m3 for polar regions (Max, 2003). Boswell and Collette (2011) put this reserve at a volume of 1–120 × 10^15 m3 of methane trapped within the global reserve. In the near future, hydrates could alter the energy demography of the world. Such efforts are in progress in India (Sain, 2012), Japan (Pfeifer, 2014), and elsewhere. This has the potential of creating another energy revolution, the likes of which has not occurred in the last 100+ years. The following is an estimate of the currently known hydrate reserves for some of the leading countries:

USA – 318,000 Tcf
Alaska North Slope – 590 Tcf
Japan – 1,765 Tcf
India – 4,307 Tcf
Canada – 1,550–28,600 Tcf

A scientific approach to the natural gas resources of the world reveals that the future of energy lies in exploiting unconventional gas. Unlike what has been promoted in recent decades, unconventional gas is more likely to generate truly 'clean' energy. Thankfully, this fact is being recognized by the future players. Figure 5.42 shows how there will be a major shift in the usage of unconventional gas. This trend will shape the future of energy.

Figure 5.42 Future trends among some of the major future users of unconventional gas (from EIA report, 2013). In the even longer-term future, gas hydrates offer the greatest promise of meeting energy needs. Gas hydrates are the most abundant of unconventional gas resources, and they are the largest global sink for organic carbon (Figure 5.43).

Figure 5.43 Gas hydrates, the largest global sink for organic carbon, offer the greatest prospect for the future of energy (from Islam, 2014). At present, over 30 countries are actively involved in at least the exploration of unconventional gas. Even for the most active countries, the assessment of unconventional reserves is in its nascent state.

5.6 Science of Healthy Energy and Mass

Scientific analysis of energy involves determination of the energy source and content. Energy is known to be the cause of actions, which are ubiquitous. Scientifically, every action and movement has a driver, and because every object is in motion, that driver is ubiquitous. New Science has identified the sun as the ultimate energy source for the earth. While this conclusion is true, the premise that defines energy in New Science is spurious (Islam et al., 2014). In this section, some of the scientific aspects of energy will be discussed. The conventional understanding of energy and the conservation of energy emerges from a discrete description of mass and energy: it assumes that mass exists independent of energy. This disconnection between mass and energy is rooted in a cognition paradox introduced in Ancient Greece and later readily adopted by the likes of Thomas Aquinas and other Church-inspired scholars.

5.6.1 Role of Water, Air, Clay and Fire in Scientific Characterization

Around 450 B.C., the Greek philosopher Empedocles characterized all matter in terms of four elements: earth, air, fire, and water. Note that the word 'earth' here implies clayey material or dirt; it is not the planet earth. The word 'earth' (as a human habitat) originates from the Arabic word Ardha, the root meaning of which is the habitat of the human race or "children of Adam", lower status, etc. Earth in Arabic is not a planet, as there are other words for planet. Similarly, the sun is not just another star; it is precisely the one that sustains all the energy needs of the earth. The word 'air' is Hawa in Arabic, meaning air as in the atmosphere. Note that 'air' is not the same as oxygen (or even a certain percentage of oxygen, nitrogen, carbon dioxide, etc.); it is the invisible component of the atmosphere that surrounds the earth. Air must contain all organic emissions from the earth for it to be 'full of life', and it cannot be reconstituted artificially. The term 'fire' is naar in Arabic, which refers to real fire, as when wood is burnt and both heat and light are produced. The word has the same root as light (noor), which, however, has a broader meaning. For instance, moonlight is called noor, whereas sunlight (direct light) is called adha'a. In Arabic, there is a different word again for lightning (during a thunderstorm, for instance). The final element, 'water', is recognized as the source of life in every ancient culture. This water is not H2O. In fact, the H2O of modern science, best described as the combination of atomic hydrogen and oxygen, is a toxic product that would not allow the presence of any life. As the purity of H2O increases, its toxicity goes up and it becomes poisonous to any life form. The word 'water' in ancient cultures is best defined as the source of life. The Qur'an recognizes water as the essence of life as well as the source of all mass; in that sense, water is recognized as the first mass created.
Such a beginning would contradict the Big Bang theory, which assumes hydrogen to be the first form of mass. However, the Big Bang narration of nature is flawed and is not consistent with natural phenomena, which do not show a synthesis of elements to form new materials but instead show transformation and the irreversible merger of particles, much like the merger of two galaxies. This has been called the Galaxy model by Islam et al. (2014). In summary, water represents the embedding of all other forms of material. For water to be the source of life, it must have all the ingredients of a life form. Figure 5.44 shows how depriving water of its natural ingredients can make it reactive to the environment and render it toxic. This graph needs further clarification.

Figure 5.44 Water: a source of life when processed naturally but a potent toxin when processed mechanically. Water is a great solvent. It has a natural affinity for dissolving the numerous salts and minerals that are necessary for life support. One can argue that every component necessary for life is in water. However, this holds only for naturally occurring water. Water is routinely stripped of its solutes by the natural process of evaporation and subsequent precipitation through a series of highly complex and little-understood processes. This processing prepares water for collecting the organic matter that is necessary for life support. Rainwater is pure (in the sense that it carries little solute), but it is not toxic or harmfully reactive to the environment. As rainwater comes in contact with soil, it immediately triggers the organic transformation of matter, and life flourishes. As rainwater penetrates the outer crust, it picks up minerals and becomes even better balanced for human consumption. Another requirement for human consumption is that the water be free of organic matter as well as bacteria; as water filters through the soil, it becomes free of these harmful components. So, naturally processed water either becomes useful for human consumption directly or becomes useful for other living organisms that are part of a life cycle that includes humans. At later stages of natural processing, a balance is struck, as reflected in Figure 5.44. On the other hand, if water is processed through artificial means (marked here as 'mechanical'), various life-supporting components are removed and then replaced with toxic artificial components, many of which are not even identified. It is commonly believed that artificially 'purified' water has a great affinity for absorbing back any component from external sources; that is why such 'pure water' is used to clean semiconductors. For the same reason, this water becomes harmful to humans.
If ingested, this water starts to absorb the valuable minerals present in the body. Tests have shown that even as little as a glass of this liquid can have a negative effect on the human body. The process produces water of particularly high toxicity when reverse osmosis and nanofiltration are used. The World Health Organization (WHO) determined that demineralized water increases diuresis and the elimination of electrolytes, with decreased serum potassium concentration. Magnesium, calcium, and other nutrients in water can help protect against nutritional deficiency. Recommendations for magnesium have been put at a minimum of 10 mg/L, with 20–30 mg/L optimum; for calcium, a 20 mg/L minimum and a 40–80 mg/L optimum; and a total water hardness (adding magnesium and calcium) of 2–4 mmol/L. At water hardness above 5 mmol/L, higher incidences of gallstones, kidney stones, urinary stones, arthrosis, and arthropathies have been observed. For fluoride, the concentration recommended for dental health is 0.5–1.0 mg/L, with a maximum guideline value of 1.5 mg/L to avoid dental fluorosis (Kozisek, 2005). A significant portion of essential minerals is derived from water. "Purified" water does not contain these essential minerals and thereby disrupts the metabolic process, causing harm (Azoulay et al., 2001). When the residual components in 'purified' water contain toxins, such as the ones released from the membrane during the reverse osmosis process, the product becomes particularly toxic, as shown in the lower half of Figure 5.44. Picture 5.1 shows the essence of the natural processing of water. The formation of clouds through evaporation, rain, photosynthesis, filtration in the soil, and other steps form an integral part of a life support system that is opposite to the mechanical system in every step of the way. It is also true that energy in an organic system emerges from water, just like life. As the life cycle continues, mass transfer takes place simultaneously with energy exchange. By assigning zero mass to energy, New Science misses this continuity in its analysis.
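The guideline values quoted above can be collected into a simple screening check. The ranges below are the ones cited in the text (Kozisek, 2005); the function itself is a hypothetical illustration of how the thresholds fit together, not a clinical tool:

```python
# Screen a water analysis against the guideline values quoted in the text:
# magnesium minimum 10 mg/L (optimum 20-30), calcium minimum 20 mg/L
# (optimum 40-80), total hardness concerns above 5 mmol/L, and a fluoride
# maximum of 1.5 mg/L. Hypothetical helper for illustration only.

def screen_water(mg_mg_l, ca_mg_l, hardness_mmol_l, f_mg_l):
    """Return a list of guideline violations for a water sample."""
    flags = []
    if mg_mg_l < 10:
        flags.append("magnesium below 10 mg/L minimum")
    if ca_mg_l < 20:
        flags.append("calcium below 20 mg/L minimum")
    if hardness_mmol_l > 5:
        flags.append("hardness above 5 mmol/L (stone and arthrosis risk cited)")
    if f_mg_l > 1.5:
        flags.append("fluoride above 1.5 mg/L (dental fluorosis)")
    return flags

# A demineralized ("purified") sample fails the mineral minima:
print(screen_water(mg_mg_l=0.5, ca_mg_l=1.0, hardness_mmol_l=0.1, f_mg_l=0.0))
```

A naturally mineralized sample in the optimum ranges passes with no flags, while the demineralized sample above is flagged for both magnesium and calcium, mirroring the text's point that "purified" water lacks the minerals that natural processing supplies.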

Picture 5.1 The role of natural processing in rejuvenating water is little understood by New Science in North America. This role was well understood elsewhere around the world as late as the Islamic era (7th to 17th century). a) Over Ontario [Canada] (23 May 2014); b) Over Potter, NE (20 May 2014). In all, the characterization credited to Empedocles and known to modern Europe conforms to the criterion of phenomena as outlined in the work of Islam et al. (2010) as well as Khan and Islam (2007). This fundamental criterion can be stated as not violating the properties of nature.

In fact, this characterization has the following strengths: 1) the definitions are real, meaning they have a phenomenal first premise; 2) it recognizes the continuity in nature (including that between matter and energy); and 3) it captures the essence of a natural lifestyle. With this characterization, nuclear energy would not emerge as an energy source, and fluorescent light would not qualify as natural light. With this characterization, none of the unsustainable technologies of today would have come into existence. In the context of working out the systematic characterization of matter, the concept of a fundamental substance was introduced by another Greek philosopher, Leucippus, who lived around 478 B.C. Even though his original work was not accessible even to the Arabs, who brought the annals of ancient Greek knowledge to the modern age, his student Democritus (420 B.C.) documented Leucippus' work, which was later translated into Arabic, then into Latin, followed by modern Greek and other contemporary European languages. That work contained the word 'atom' (άτομο in Greek), perpetrated as a fundamental unit of matter. This word created some discussion among Arab scientists 900 years ago. They understood the meaning to be 'undivided' (this is different from the conventional meaning, 'indivisible', used in Europe in the post-Renaissance era). This would be consistent with Muslim scholars, because they would not assign any property (such as indivisibility) that carries the risk of being proven false (as in the case of the conventional meaning of atom). Their acceptance of the word atom was again in conformity with the criteria listed in Chapter 4, along with the fundamental traits of nature. The atom was not considered to be indivisible, or identical, or uniform, or to have any of the other properties commonly asserted in contemporary atomic theory. In fact, the fundamental notion of creating an aphenomenal basis or unit is a strictly post-Roman Catholic Church European one.
Arab annals of knowledge in the Islamic era, starting from the 7th century, have no such tradition (Zatzman, 2007). This is not to say they did not know how to measure. On the contrary, they had yardsticks that were available to everyone. Consider such units as the blink of an eye (tarfa) for small-scale time and the bushel of grain for the medium scale (useful for someone who mills grain using manual stone grinders). The unit of matter was the dust particle (dharra, meaning the dust particles that become visible when a window is opened to let sunlight into a room; this word is erroneously translated as 'atom'). As cited by Khan and Islam (2012) and Zatzman et al. (2007), using this phenomenal basis, Islamic scholars were able to advance knowledge to a great extent. For example, some one thousand years before Europeans were debating the flatness of the Earth, the researchers of Caliph Al-Mamoon already knew the earth is ovoid. When the Caliph wanted to know the 'circumference' of the earth, he sent out two highly competent scientific expeditions. Working independently, they were to measure the circumference of the Earth. The first expedition went to Sinjar, a very flat desert in Iraq. At a certain point, on latitude 35 degrees north, they fixed a post into the ground and tied a rope to it. Then they started to walk carefully northwards, in order to make the North Pole appear one degree higher in the sky. Each time the end of the rope was reached, the expedition fixed another post and stretched another rope from
it until their destination was reached: latitude 36 degrees north. They recorded the total length of the ropes and returned to the original starting point at 35 degrees north. From there, they repeated the experiment, heading south this time. They continued walking and stretching ropes between posts until the North Pole dropped in the sky by one degree, when they reached the latitude of 34 degrees. The second of Al-Mamoon's expeditions did the same thing, but in the Kufa desert. When they had finished the task, both expeditions returned to Al-Mamoon and told him the total length of the rope used for measuring the length of one degree of the Earth's circumference. Taking the average of all the expeditions, the length of one degree amounted to 56.6 Arabic miles. The Arabic mile is equal to 1,973 metres. Therefore, according to the measurements made by the two expeditions, the Earth's circumference was equal to 40,252 kilometres. Nowadays, the figure, measured through the equator, is held to be 40,075 kilometres, a difference of less than 200 km. This illustrates how powerful such a phenomenal basis was for conducting measurements and verifying theories.

Heraclitus (540 B.C.) argued that all matter was in flux and vulnerable to change regardless of its apparent solidity. This is obviously a more profound view, even though, like Democritus, he lacked any special lab-type facilities to investigate this insight further, or otherwise to look into what the actual structure of atomic matter would be. As it turned out, the theory of Heraclitus was rejected by the subsequent Greek philosophers of his time. The less elaborate 'atomic theory' described by Democritus had the notion of atoms being in perpetual motion in a void.
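The expedition arithmetic described above is easy to verify: one degree measured at 56.6 Arabic miles, the Arabic mile taken as 1,973 metres, scaled to 360 degrees. The product comes to about 40,200 km; the chapter's quoted 40,252 km presumably reflects a slightly different rounding of the mile length. A direct check:

```python
# Verify the Al-Mamoon expedition arithmetic quoted in the text.
DEGREE_LENGTH_ARABIC_MILES = 56.6   # averaged length of one degree of latitude
ARABIC_MILE_M = 1973                # metres per Arabic mile, as quoted
MODERN_EQUATORIAL_KM = 40_075       # modern equatorial circumference

# One degree in metres, times 360 degrees, converted to kilometres.
circumference_km = DEGREE_LENGTH_ARABIC_MILES * ARABIC_MILE_M * 360 / 1000

error_km = abs(circumference_km - MODERN_EQUATORIAL_KM)
print(f"Measured circumference: {circumference_km:,.0f} km")
print(f"Error vs modern value:  {error_km:,.0f} km ({error_km / MODERN_EQUATORIAL_KM:.2%})")
```

Either way, the ninth-century measurement lands within about 0.3% of the modern equatorial value, which is the point the text is making about the power of a phenomenal measurement basis.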
While being in constant motion (perpetual should not mean uniform or constant speed) is in conformance with natural traits, the void is not something phenomenal. In Arabic, the closest word to describe void is 'cipher' (the origin of the word decipher, meaning removing the zeros, or the fillers), which means empty (this word, which has existed in Arabic for over 1400 years, was not used in the Qur'an). For instance, a hand or a bowl can be empty because it has no visible content in it, but that would never imply it has nothing in it (for instance, it must have air or dust specks, dharra, that become visible under the sunlight). The association of 'cipher' with zero was made much later, when the Arabs came to know about the role of zero from Indian mathematicians. One very useful application of zero was in its role as a filler; that alone made the counting system take a giant leap forward. However, this zero (or cipher, or 'sunya' in Sanskrit) never implies nothingness. In Sanskrit, Maha Sunya (Great Zero) refers to outer space, which is anything but void in the sense of nothingness. Similarly, the Arabic equivalent is As-sama'a, which stands for anything above the earth, including the seven layers of skies, only the first of which is 'decorated' with stars. In ancient Greek culture, however, void refers to the original status of the Universe, which was thought to be filled with nothingness. This status is further confused with the state of chaos, another Greek term that has void as its root. The word chaos does not exist in the Qur'an, where it is asserted that the universal order would not allow any state of chaos, which would signal a loss of control by the Supreme Authority. However, 'nothingness' is used in terms of creation (fatara, in Arabic) from nothing. It is not clear what notion Leucippus had regarding the nature of atomic particles, but from the outset, if it meant a particle (undivided) that is in perpetual motion, it would not be in conflict with the fundamental nature of natural objects. This notion would put everything in a state of flux. Mainstream Greek philosophy would view this negatively for its subversive implication that nature is essentially chaotic. Such an inference threatened the Greek mainstream view that Chaos was the Void that had preceded the coming into existence of the world, and that a natural order came into existence putting an end to chaos. As stated earlier, this confusion arises from misunderstanding the origin of the Universe.11 Even though this view was rejected by contemporary Greek scholars, the notion of nature being dynamic was accepted by Arab scholars, who did not see it as a conflict with natural order. In fact, their vision of the Universe is that everything is in motion and there is no chaos. Often, they referred to a verse of the Qur'an (36:38) that speaks of the sun as a continuously moving object, moving not haphazardly but in a precisely predetermined direction, assuring universal order. Another intriguing point made by Democritus is that the feel and taste of a substance is a function of the "atoms" of the substance acting on the "atoms" of our sense organs. This theory, advanced over a thousand years before the Alchemists' revolutionary work on modern chemistry, was correct in the sense that it supports the fundamental traits of nature.
This suggestion that everything that comes into contact contributes to an exchange of “atoms” (ατομοσ) would have stopped us from making toxic chemicals in the belief that they are either inert (totally isolated from the system of interest) or that their concentration is so low that the leaching can be neglected. It would prevent us from seeing the headlines that we see every day. This theory, which could have revolutionized chemical engineering 1000 years before the Alchemists (at least for Europe, as the Egyptians were already much advanced in chemical engineering some 6000 years ago), was rejected by Aristotle (384–322 B.C.), who became the most powerful and famous of the Greek scientific philosophers. Instead, Aristotle adopted and developed Empedocles’s ideas of elemental substances, which were originally well founded. While Aristotle took the fundamental concept of fire, water, earth, and air being the fundamental ingredients of all matter, he added qualitative parameters, such as hot, moist, cold, and dry. Figure 5.45 shows Aristotle’s model for the four fundamental elements of matter. This is the oldest form of phase diagram that can be found in Europe. The figure is in effect a steady-state model. The elimination of the time function made the diagram appear perfectly symmetrical, which is the essence of Atomism. Democritus is indeed most often cited as the source of the atomic theory of matter, but there is a strong argument that what he had in mind was a highly idealized notion, not anything based on actual material structure. For the Greeks, symmetry was believed to be good in itself and was largely achieved by geometric rearrangement of [usually] two-dimensional space. There is an ambiguity as to whether Greek atomists thought of atoms as anything other than an infinite spatial subdivision of matter.

Heraclitus’ major achievement – which also marginalized him among the other thinkers of his time, unfortunately – was his incorporation of a notion of the effects of time as a duration of some kind, as some other kind of space in which everything played itself out.

Figure 5.45 Aristotle’s four-element phase diagram (steady-state). On the matter of the role of time sequence and universal order, Heraclitus had a profound view that was considered a paradox and was rejected (Graham, 2006). Heraclitus wrote: “This world-order [kosmos], the same of all, no god nor man did create, but it ever was and is and will be: everliving fire, kindling in measures and being quenched in measures.” This would be the first case of an Agnostic assumption of ‘self-creation’ and/or the everlasting nature of the universe, conflating ‘infinity’, a trait of the creator, with a trait of the creation. In addition, he uses, for the first time in any extant Greek text, the word kosmos (“order”) to mean something perceived as “world.” He identifies the world with fire, but goes on to specify portions of fire that are kindling and being quenched. Although ancient sources, including Aristotle as well as the Stoics, attributed to Heraclitus a world that was periodically destroyed by fire and then reborn, the present statement seems to contradict that view, as Hegel also noticed. If the world always was and is and will be, then it does not perish and come back into existence, though portions of it (measures of fire) are constantly being transformed. This contradiction and paradox are erased if “world-order” is replaced with “universal order” and the creation of time12 is placed before the creation of everything else (known as ‘matter’ at a later time). A consistent and non-paradoxical meaning emerges if the following sequence is used: the Creator (Absolute time) created the Absolute plan (variable time) before creating everything as a function of time (Islam et al., 2014). This also resolves the conflation of the term ‘infinity’ with ‘never-ending’. While the creator is infinite, the creation is ‘never-ending’. Figure 5.46 is a depiction of what is sacred in the European narration of philosophy.
Even though this depiction is attributed to the Roman Catholic Church, it is clearly related to features seen in the ‘Absolute being’ of ancient Greece, later transformed into all aspects of scientific and mathematical cognition. Aristotle said, “The mathematical sciences particularly exhibit order,

symmetry, and limitation; and these are the greatest forms of the beautiful”. This may well have been the beginning of the reversal of science from long-term to short-term through mathematization. Linearization is the ultimate version of such mathematization.

Figure 5.46 Divinity in Europe is synonymous with uniformity, symmetry, and homogeneity, none of which exists in nature. This fascination with homogeneity, symmetry, and other ‘godly’ attributes, and the assigning of them to creation, is uniquely European (Livio, 2005). The orient is known to have a different description of nature. At no time did ancient India, Babylonia, or ancient China conflate the nature of God with the nature of humans, even for Avatars. The most important “transformation” between creator and creation is through the notion of the Avatar in India. An avatar is bound to change and is subject to all features of a mortal or natural object. Figure 5.47 is a recasting of Figure 5.45 that introduces the concept of water as the source of life and fire as the end of life, the whole process being connected through the periodicity of day and night. This was the model used by the scholars and Alchemists of the Islamic golden era (7th to 17th century).

Figure 5.47 Recasting Figure 5.45 with the proper time function. Note how water is considered to be the source of all mass, which goes through natural processing during day and night, constantly giving rise to new life forms (e.g., photosynthesis, blossoming of flowers, sprouting of seeds) and ending with death, which then gives rise to the ingredients necessary for new forms of life. Therefore, both life and death were considered integral parts of the universal order, which follows a cyclic pattern, always conserving mass and energy simultaneously. Of importance is the notion that the Qur’an indicates that the word Ardha (Earth) is the habitat for mankind, who is made out of clay, the principal component of earth. This same earth also gives rise to plant life that serves as the transformer of solar energy into biomass. The word ‘air’ in Arabic is hawa, which also means life. This description and characterization of mass and energy is also typical of the Yin Yang concept, known in the Korean peninsula for the entire recorded period of history. Figure 5.48 shows the depiction of fire and water in Yin Yang form. The figure shows the coexistence of fire and water, the two main components of the universe, through an asymmetrical pattern, while keeping the complementary nature of those components intact. The broader symmetry that is seen is more thematic than tangible.

Figure 5.48 Water and fire are depicted through taegeuk (yin yang). The central message of this picture is stated through the words of the famous mythology professor Joseph Campbell (1904–1987), who said, “The goal of life is to make your heartbeat match the beat of the universe, to match your nature with Nature.” This is a theme at the core of the Islamic faith, which describes religion as Deen (natural trait) and defines a Muslim (from the root word salama, meaning surrender and peace) as one in tune with nature and the natural order (Qadr in Arabic).

Figure 5.49 The Korean national flag contains ancient symbols of creation and creator. This natural order is unique and is manifested through a unique function of time. With this characterization, there cannot be multiple histories of the same particle or event. Therefore, one of the biggest problems of quantum theory does not occur with this scientific characterization of matter and energy. Note in Figure 5.48 how both the fire and water parts contain global symmetry but no local symmetry. Similarly, the flag of South Korea exhibits anti-symmetry, or symmetry with a property reversal (Figure 5.49). The white background is a traditional Korean color; it represents peace and purity. Note that the central circle is not tangible; it signifies the never-ending (sustainable) nature of nature. The flag is anti-symmetrical because of the red/blue interchange, both in color and in deeper meaning. The deeper meanings represent opposing functions of the universe. The most comprehensive meaning is the existence of the tangible and the intangible. These seemingly opposite or contrary forces are interconnected and interdependent in the natural universe, which allows the existence of both in perfect harmony. Such harmony is in sharp contrast with the European notion that the universe is constantly

degrading, or that there is a continuous struggle between good and evil. The European notions emerge from a first premise that is similar to the dogmatic notions of the ‘fall’ and ‘original sin’. That Nature is a union of opposites in harmony comes from the first premise that Nature is perfect and balanced (Khan and Islam, 2012). Many natural dualities (such as light and dark, woman and man, day and night, high and low, hot and cold, fire and water, life and death, and so on) are thought of as physical manifestations of the yin-yang concept. This concept also reveals that the duality is apparent and is a matter of the observer’s perception, whereas it is absolute for the object. For instance, a blind person does not see the difference between night and day, but that perception does not change anything about the night and day. This fact demonstrates the absurdity of quantum theory, which makes reality a function of perception. Reality cannot be a function of perception, unless there is no such thing as reality. According to Aristotle, one of the mistakes of Zeno in his paradoxes of time and motion is that he did not distinguish between actual and potential infinities. Scientifically, it is the continuity of the time function that eluded Zeno. Aristotle ‘remedied’ it by differentiating between actual (present) and potential (future). He then asserted, “Everything is potential infinity and nothing is actual infinity.” This, in essence, reversed Plato’s concept of reality. Such absurdity does not occur in oriental cognition, which considers the time function properly. Furthermore, all objects (matter and energy) and events (time) in the world are expressed by the movement of “yin” and “yang.” For instance, the moon is yin while the sun is yang; the earth is yin and the heaven (sky) is yang; a woman is yin and a man is yang; the night is yin and the day is yang; the winter is yin and the summer is yang; etc. Yin and yang are opposite, yet they work in perfect harmony.
This aspect is typical of Qur’anic philosophy as well, as everything is reported to be created in pairs (the Arabic word is zawj, as in spouse or couple). The duality in the Yin Yang is further highlighted in the eight markings, called trigrams or Pakua symbols, which are opposites of one another diagonally (Figure 5.50). Broken lines face unbroken lines and vice versa. The trigrams together represent the principle of movement and harmony. Each trigram (hangul: kwae) represents one of the four classical elements, namely heaven, earth, fire, and water. Each kwae consists of three bars of divination signs that can be either broken or unbroken. A broken bar stands for yin, while an unbroken bar stands for yang. Numerous combinations are possible, but the four basic elements correspond to heaven, water, earth, and fire.

Figure 5.50 Combinations of the various fundamental elements make up the rest of creation (from Islam, 2014). If one replaces each unbroken line in the trigrams by 0 and each broken line by 1, one can see

that the symbols represent numbers in the binary (base-two) number system. The symbols and their meanings appear below. The binary numbers are read from bottom to top (Figure 5.58). The top left corner of the flag is three unbroken lines and represents heaven (0). This is significant because both ‘heaven’ and 0 signify the origin. Note that this ‘heaven’ is not the one that ‘righteous’ people end up in (in the theological sense); it is the heaven that originated the entire creation. In the English language, the use of the word ‘heaven’ is full of ambiguity that comes from the Roman Catholic Church’s interpretation of the origin of the universe and men, as well as from the myths that purport the notion of ‘gods and angels’ residing in heaven. In the Arabic language, however, such ambiguity does not occur. The Creator does not reside in Jannah (literally meaning ‘garden’ and often translated as ‘heaven’), as He is not constrained by space or time. He is also the originator of everything. For instance, the Qur’an (2:117) states: “Originator of the skies and the earth. When He decrees a matter, He only says to it, ‘Be,’ and it is.” The Arabic word for skies, Samawah, is often incorrectly translated as ‘heavens’, highlighting the confusion of the English language. Equally important is the use of ‘zero’ to denominate the source of everything. This ‘zero’ can have two meanings, i.e., nothingness and origin. The nothingness would coincide with the ancient Greek word Chaos. However, this zero (‘sunya’ in Sanskrit and cipher in Arabic) never implies nothingness as in void. It rather implies the originator, who originated everything from nothing. This is consistent with Islam as well as notable oriental religions. For instance, in Sanskrit, Maha Sunya (Great Zero) refers to outer space, which is anything but void as in nothingness and most often refers to the Creator.
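The trigram-to-binary mapping described above can be sketched in a few lines. This is an illustrative sketch only; the convention of taking the bottom bar as the least significant bit is an assumption, chosen because it reproduces the values the text assigns (heaven 0, fire 2, water 5, earth 7):

```python
# Sketch: encode the four taegeuk trigrams as binary numbers.
# Unbroken (yang) bar -> 0, broken (yin) bar -> 1; bars are read bottom
# to top, so the bottom bar is the least significant bit (an assumption
# consistent with the values in the text: heaven 0, fire 2, water 5, earth 7).

TRIGRAMS = {
    "heaven": ("unbroken", "unbroken", "unbroken"),  # (bottom, middle, top)
    "fire":   ("unbroken", "broken",   "unbroken"),
    "water":  ("broken",   "unbroken", "broken"),
    "earth":  ("broken",   "broken",   "broken"),
}

def trigram_value(bars):
    """Interpret a (bottom, middle, top) bar tuple as a binary number."""
    value = 0
    for position, bar in enumerate(bars):  # position 0 = least significant
        if bar == "broken":
            value += 2 ** position
    return value

for name, bars in TRIGRAMS.items():
    print(name, trigram_value(bars))  # heaven 0, fire 2, water 5, earth 7
```

Reversing every bar (broken for unbroken) maps each trigram to its diagonal opposite on the flag, which is exactly the anti-symmetry described in the text.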
Interestingly, the Arabic word ‘cipher’, while recognized as the origin of the word ‘zero’, does not represent void; it rather refers to ‘emptiness’. In ancient Greek culture, however, void refers to the original status of the universe, which was thought to be filled with nothingness in terms of tangibles, including time. Similarly, the Arabic word As-sama’, which stands for anything above the earth, including seven layers of skies, is not ‘heaven’, which is Jannah (garden) in Arabic and literally means ‘garden of paradise’. The Qur’an often refers to the Creator as the one ‘in the sky’ (e.g., Qur’an 67:16 says: Do you feel secure that He who is in the sky (sama’a) would not cause the earth to swallow you and suddenly it would sway?). Opposite to ‘heaven’ is the earth (designated by the number seven), placed at the lower right corner of the flag. This placement, as well as the number 7, are both noteworthy and contain deeper meaning. While the earth is known to be ‘just’ a planet in European science, it holds a much deeper meaning in the Qur’an, which defines humans as the viceroy (khalifa) of the Creator (e.g., Chapter 2:30 of the Qur’an specifies man’s role as the viceroy), charged with law and order on Earth. The Arabic word for ‘earth’ is Ardha, which means ‘habitat for humans (the viceroy of the creator)’. This outlook is clearly different from the Eurocentric notions, ranging from the vastly discredited ‘original sin’ to the widely accepted ‘evolution’ theories (McHenry, 2009; Hall, 2008), that detach human conscience from its functioning in a society. Overall, they confused the Creator’s traits with the traits of creation. This confusion is many centuries old, as observed in the ‘scientific’ work of Thomas Aquinas. It immediately cut off the relationship between Creator and Creation. This involved confusion in understanding what is natural, a confusion that continues until today. Then, they confused creations (other than

humans) with humans, due to the lack of logical premises defining the purpose of humanity. For them, humans are just another set of animals and the earth is just another planet. This latter confusion cut off conscience (and ownership of intention) from humanity. The ground became fertile for the onset of various forms of aphenomenal cognition, some of which called themselves Naturalism, Agnosticism, Secularism, Atheism, etc. Water (number five) is placed at the top right corner. The existence of water as a fundamental element is important. In every culture, water is synonymous with life and liveliness. The Qur’an places the existence of water before anything else. Opposite to water is fire (number two) at the lower left corner. The role of fire is opposite to that of water, yet it is essential to life. This life-forming fire comes from carbon, another essential, but often overlooked, component of life. Without the fire of carbon, there is no carbon dioxide, the essence of plants, and therefore of life. Fire represents the transition from cold to hot, from life to death, from tangible (water or liquid) to intangible (vapor or gas). This phase change is typical of creation. In fact, the very fact that everything is moving (a function of time) makes it essential to go through phases of the tangible and the intangible. Overall, this continues in an eternal circle. Picture 2 shows how it is natural to have such dual characteristics in any object. It is also important to note that these two components are often opposite but complementary. In most cases, one of them represents the tangible aspect whereas the other represents the intangible aspect. The next aspect of yin yang is the existence of such transition in everything, at all scales. Figure 5.50 shows how a 24-hour clock of yin yang allows continuous transition as a cycle of life.

Figure 5.50 Evolution of Yin and Yang with time (from Islam, 2014). Table 5.4 shows the tangible and intangible nature of the Yin Yang. The Yin Yang shows contrast as well as interdependence. For instance, no matter is produced without energy and no energy is produced without matter. Water is needed for plants, which are then needed for fire. This logic also shows that nothing is real unless it is part of the positive-negative cycle. For

instance, fire without water is not real. That would explain why diamond cannot be set on fire even though it is made out of carbon. Similarly, the presence of mass would indicate the presence of energy. This would make the existence of zero energy and infinite mass an absurd concept, even though new cosmic physicists routinely tout that notion (Krauss, 2012).

Picture 5.3 Natural occurrence of the yin yang structure.

Table 5.4 The tangible and intangible nature of yin and yang (from Islam, 2014).

Yin              Yang
tangible         intangible
Produces form    Produces energy
Grows            Generates
Substantial      Non-substantial
Matter           Energy
Contraction      Expansion
Descending       Ascending
Below            Above
Water            Fire

Figure 5.51 also shows how the Yin and Yang encircle each other, alternating as a continuous function of time. As time progresses, yin becomes yang and vice versa. This progression confirms the existence of a characteristic time function for every object at every scale. Picture 4 shows the depiction of Yin Yang in relation to a mother. The mother here is represented by Time (as in the time function), whereas time itself is surrounded by Absolute Time (Dhahr in Arabic), which is considered a trait of the creator in ancient Indian, Greek, as well as Qur’anic traditions. This mother is significant, as in ancient Hindu culture the supreme God is symbolized by the ‘mother’. In the Qur’anic narrative, the creator’s first two traits literally mean ‘womb that is infinitely continuous in space’ and ‘womb that is infinitely continuous in time’. The kittens here represent yin and yang, while the mother forms a yang yin with the father of the kittens. The father here remains intangible whereas the mother cat is tangible. Absolute Time

itself forms a yin yang within the same external object, i.e., the creator, whose other trait has been known to be Absolute Light (Noor in Arabic) since ancient Greek times. Similarity within creation exists through matter (tangible) and energy (intangible). While the existence of these two components of nature is not controversial, New Science has disconnected matter from energy by assigning zero mass to photons. The logic that without mass there cannot be any energy makes it clear that such a disconnection is unwarranted. In addition, the notion of anisotropy should be understood in each of these relationships. For instance, Time is a function of Absolute Time, but Absolute Time is free from any dependence on time. In Picture 3, this fact is symbolized by the mother cat, whose movement is confined by the wooden structure surrounding her, while the mother cat has no influence on the wooden structure. Similarly, the mother cat controls the kittens and restricts their movement, whereas the kittens have no control over the mother cat. Finally, the role of the intangible must be understood. How Absolute Time affects or is affected by Absolute Light is unknown to us. Thankfully, it is not necessary to have that knowledge in order to characterize matter and time on earth. However, the role of the intangible within the realm of the kittens is manifested through the mother cat (tangible) and the father cat (intangible). The father cat does not affect the mother cat’s persona but affects the nature of the kittens. There is no reversibility, nor is there any real symmetry. This would explain the absence of real symmetry and the presence of uni-directionality in nature. It would also explain why everything in nature is transient and a unique function of Absolute Time, including Time itself. It is, therefore, expected that every phenomenal object would have a characteristic time function, often identified as frequency, which varies with time.
Our solar system offers an excellent example of this notion of a characteristic frequency that is itself variable.

Picture 4 Depiction of Absolute Time, time, and Yin Yang in nature. In the solar system, the moon is moving around its own axis, then around the earth, while keeping pace with the earth, which is orbiting around the sun, and keeping pace with the sun, which is moving both around its own axis and around an unknown object. Figure 5.51 shows a snapshot of the solar system. In this system, the Earth, the moon, and the sun are all moving in many directions, but with variable, non-uniform motion. Over a short span, the frequency may appear to be fixed or constant, but over a reasonably larger time span, they vary. This is true for every object, including humans and human body parts (Islam et al., 2014). It is reasonable to assume that such dependence of orbital speed on size reverses for invisible elements. The

orbital speeds of various known objects are plotted in Figure 5.51 as a function of size, along with a reverse relationship for invisible particles. If a similar model is followed for invisible structures smaller than a dust speck, the following figure emerges. In this figure, the dust speck (dharra in Arabic) is identified as the smallest object. This is in line with the Avalanche theory, recently advanced by Khan et al. (2008) and Khan and Islam (2012). From there arises a natural characteristic speed on the following scale.

Figure 5.51 Sun, earth, and moon move at a characteristic speed in infinite directions. In Figure 5.52, a dust speck represents the reversal of the speed vs. size trend. For so-called subatomic particles, speed increases as the size decreases. The Higgs boson is assigned a smaller value than the quark but a larger value than the photon. This is done deliberately in order to float the notion that a fundamental particle, and finality in determining such a particle, is a spurious concept. Note that the actual speed in the absolute sense is infinity for the smallest particle. This is because each element has a speed in every dimension, and this dimensionality is not restricted to Cartesian coordinates. As the number of dimensions goes up, so does the absolute speed, approaching infinity when projected on an absolute scale. The characteristic speed also increases as the size of the entity goes down. For an infinitely small entity, the speed would approach infinity. This analysis shows how both small and large scales are in harmony with the infinitude associated with ‘void’. In the pre-Thomas Aquinas period, such a ‘void’ was synonymous with the creator, within whom all the creation was believed to be embedded. Table 5.5 shows some of the characteristic speeds (and thus, frequencies) of various particles. Note that these characteristic speeds are all a function of time.

Figure 5.52 Orbital speed vs size (not to scale).

Table 5.5 Characteristic frequency of “natural” objects (from Islam, 2014).

Object                 Nature of speed  Average speed  Comment
Sun                    Orbital          240 km/s       Around an unknown object 2.55 × 10^20 m away; estimated orbital time 200 million years
Sun                    Drift            19 km/s        Due to variation in galactic rotation
Sun                    Spinning         Unclear
Earth                  Escape           240 km/s       To match the orbital speed of the sun
Earth                  Orbital          30 km/s        Around the sun
Earth                  Spinning         0.44 km/s      At equator, to keep up with the sun
Moon                   Broad escape     240 km/s       To keep up with the sun
Moon                   Escape           30 km/s        To keep up with the earth
Moon                   Orbital          1 km/s         To keep the same face exposed to one side
Moon                   Spinning         12 km/s        Rigid ball assumption
Atom, radius 10^-9 m                    Unknown
Electron, 10^-15 m                      2,200 km/s     Under non-excited conditions (Bohr model, uniform speed assumption)
Proton, 3 × 10^-15 m                    Unknown        Rigid ball assumption
Quark                                   Unknown        Non-measurable dimension
Photon                                  300,000 km/s   Rigid ball assumption
Higgs-Boson                             300,000 km/s   Rigid ball assumption
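The size-vs-speed trend asserted in the text, with its claimed reversal below the dust-speck scale, can be sketched from the Table 5.5 values. The sizes in the snippet are rough order-of-magnitude figures added here purely for illustration (they are not from the book), and the trend itself is the book's hypothesis rather than established physics:

```python
# Illustrative sketch of the speed-vs-size trend claimed in Figure 5.52.
# Speeds follow Table 5.5; the sizes are rough order-of-magnitude
# assumptions added for illustration only.

visible = [
    # (name, approximate diameter in metres, characteristic speed in km/s)
    ("Moon",  3.5e6, 1),     # orbital speed around the earth
    ("Earth", 1.3e7, 30),    # orbital speed around the sun
    ("Sun",   1.4e9, 240),   # orbital speed around an unknown object
]

subvisible = [
    ("Electron", 1e-15, 2_200),    # Bohr-model speed, uniform-speed assumption
    ("Photon",   None,  300_000),  # size treated as non-measurable
]

# For visible bodies, the claimed trend is: the larger the size, the
# higher the characteristic speed; below the dust-speck scale, the
# book claims the trend reverses (smaller size, higher speed).
speeds = [s for _, _, s in visible]
assert speeds == sorted(speeds)  # speed rises with size for visible bodies

for name, size, speed in visible + subvisible:
    size_txt = f"{size:.1e} m" if size is not None else "non-measurable"
    print(f"{name:8s} size {size_txt:>15s}  speed {speed} km/s")
```

The assertion simply checks that the tabulated values are consistent with the trend the text describes; it does not validate the underlying hypothesis.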

Furthermore, Figure 5.53 shows that there is no object in steady state. It is also true that there is no object in uniform motion. This arises from the original premise that time itself is a variable. As a consequence, the characteristic speed of each object changes with time. Such a characteristic time exists for every object. There is a quantum change in characteristic features during phase transfer, or when a life begins (from non-organic to organic) or ceases for an individual (from organic to non-organic). In this process, life and death are triggers or bifurcation points, as the associated time functions change drastically. It should be noted that such a transition is subjective, and the death of one entity only means death for that particular object. Here, it is the time function, f(t), that defines the pathway of any entity within the universal order. This time is not arbitrary and is tightly controlled by the external entity, the Absolute

Time, as presented later in this chapter. In Figure 5.52, dust specks represent the objects closest to a stable, steady state. All ancient cultures, culminating in the Qur’an, consider that humans are created from clay or dust specks. As shown earlier, earth or clay is an integral part of the organic systems that constitute the habitat for humans. Following is a list of some of the characteristic times as related to humans:

Earth: day and night, year, pace with the sun
Humans: blink of an eye, sunrise, mid-day, sunset, sleep and wake, week, month, menstrual cycle; 40 days, 40 years
Society: 40 years, centuries, millennia
Geology: millennia
Cosmos: billion years

The heart rate is typical of the natural frequency of humans. Even though the heart rate is frequently talked about in the context of both physical and psychological conditions, brain waves are also characteristic of human activities (Figure 5.53). Change in brain waves is evident during sleep, alertness, meditation, etc. Little information is available as to how such frequencies can affect overall human conditions, whereas most focus has been on how to alter natural frequencies. What makes this complicated is that scientists have little knowledge of how these frequencies naturally vary with time as a person ages. Clearly, humans are not in control of their brain waves, thereby consolidating the theory that humans are an integral part of the universal order and their degree of freedom lies only within their intention.
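The span of the characteristic-time scales listed above can be made concrete by converting each to seconds. This is a sketch; the numeric values are standard approximations added here for illustration, not figures from the text:

```python
# Sketch: convert the characteristic times listed in the text to seconds
# to show the span of scales involved. "40 days" and "40 years" follow
# the text; the conversion factors are standard approximations.

DAY = 24 * 3600            # seconds in a day
YEAR = 365.25 * DAY        # seconds in a (Julian) year

characteristic_times = {
    "blink of an eye": 0.3,        # ~300 ms, a typical estimate (assumption)
    "day and night":   DAY,
    "menstrual cycle": 28 * DAY,   # ~28 days, a typical estimate (assumption)
    "40 days":         40 * DAY,
    "year":            YEAR,
    "40 years":        40 * YEAR,
    "millennium":      1000 * YEAR,
    "billion years":   1e9 * YEAR,
}

# Print from shortest to longest; the scales span about 17 orders of magnitude.
for name, seconds in sorted(characteristic_times.items(), key=lambda kv: kv[1]):
    print(f"{name:16s} ~{seconds:.3g} s")
```

The point of the tabulation is simply that "characteristic time" in the text ranges over enormously different scales, from a fraction of a second to cosmic durations.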

Figure 5.53 The heart beat (picture above) represents the natural frequency of a human, whereas brain waves represent how a human is in harmony with the rest of the universe (from Islam et al., 2015). Heart beats themselves also change over time. As a sign of every characteristic frequency

itself being a function of time, the graph in Figure 5.54 was produced. In it, the data on puberty and older people are extrapolated from published papers (e.g., Larson et al., 2013). Naturally, children are more dynamic and their body parts are renewed faster. This necessitates faster replenishment of energy. The idea is to nurture natural frequencies rather than fight them. New science does the opposite: every ‘treatment’ is aimed at altering natural frequency, thereby countering the forces of nature.
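The age dependence of maximum heart rate shown in Figure 5.54 can be roughly mimicked with the widely used "220 minus age" rule of thumb. This is a hypothetical illustration only: it is not the formula behind the figure, whose data are extrapolated from Larson et al. (2013), and the resting rate used below is a typical textbook value, not a figure from the text:

```python
# Hypothetical sketch: rule-of-thumb maximum heart rate by age.
# The "220 - age" estimate is a common approximation, NOT the data
# source of Figure 5.54. The resting (minimum) rate is a typical adult
# value, included only to mimic a max/min-by-age-group table.

def max_heart_rate(age_years):
    """Rule-of-thumb maximum heart rate in beats per minute."""
    return 220 - age_years

TYPICAL_RESTING_BPM = 60  # typical adult resting rate (assumption)

for age in (10, 20, 40, 60, 80):
    print(f"age {age:2d}: max ~{max_heart_rate(age)} bpm, "
          f"resting ~{TYPICAL_RESTING_BPM} bpm")
```

Even this crude rule reproduces the qualitative message of the figure: the characteristic frequency of the heart is itself a decreasing function of age, i.e., of time.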

Figure 5.54 Maximum and minimum heart rate for different age groups (from Islam et al., 2015). There is what is characteristic, but there is also what is fundamental. It is hard to believe or accept, for example, that the other natural frequencies in one’s body are unrelated to the heartbeat frequency. Thus, it would be difficult to believe that an individual’s brainwave frequency, for example, could be entirely accounted for by investigating phenomena occurring within the cerebral cortex alone. Our theory is that we have no control over these frequencies. What we do have control over is our intention. Thankfully, that does not affect the universal order, but, doubly thankfully, it does affect an individual’s long-term future. This connection of human intention with the long-term future, as well as the disconnection of the universal order from human intervention, is a notion that has been absent from every scientific cognition of post-Roman Catholic Church Europe. The exchange between two dual and opposite objects continues, and it is reasonable to assume that there is no distinction between energy particles and mass particles. The circle in the broader portion of the yin yang represents the same shape that would house another yin yang, which itself would house another set. This trend continues until we reach a state that can be characterized as the interface with Absolute Light (or Absolute Time), which is the external element. This is consistent with the pre-Newtonian narration as well as the Qur’anic narration of divine traits. Figure 5.55 depicts such infinitude, showing how ‘pure light’ surrounds both tangible and intangible, surrounding everything at all times. Such is the connection between time, the creation, and Absolute Time, and between ‘radiative light’ and pure light. Both Absolute Time and pure light represent infinity.

Figure 5.55 Tangible/intangible duality continues infinitely from mega-scale to nanoscale, from infinitely large to infinitely small. Furthermore, the notion of the male contributing to the life of the female, and the female in turn giving birth to the male, becomes an integral part of the life cycle of humans. With this cyclic balance, there is no superiority of any particular entity over another, as long as they belong to the group of creation in general. In addition, every object is in perfect harmony with nature, except humans, who are non-ideal. This ‘non-ideal’ feature has been a crucial matter of contention in European history ever since the concept of original sin was introduced. If that premise is removed, then the premise that everyone is born perfect is consistent with the premise that Nature is perfect. European dogma science defined Jesus Christ as the perfect man (a role model), but that is not consistent with his dual status as ‘son of god’. In addition, very little is known about this ‘role model’. In fact, modern scientists doubt he ever existed in the role of a messiah. European modern science does not define the perfect human, abandoning the debate as ‘religious mumbo jumbo’. In addition, it does not define good behavior or any purpose for humans other than maximizing pleasure and minimizing pain. This is in contrast to Christian dogma, but it is equally problematic, as it gives rise to chaotic rules, akin to a roller coaster ride (Islam et al., 2013). In summary, both ancient oriental and Greek philosophers support the notion of the separation of the creator (external entity) from the creation (internal entity), each of which has completely different traits. The creation itself is further divided into tangible and intangible sets with opposite traits, each having continuously decreasing size and similar asymmetry (or duality).
The Qur'an (compiled in the mid-7th century) distinguishes Absolute light (or pure light, Noor in Arabic), which is the Creator's trait (in line with being omnipresent, Absolute Guide, Absolute womb, etc.), from radiative light (adha'), which is associated with the movement of any entity. Only pure light (PL) is continuous in space and time. Everything else is embedded in PL.

This 'light' is not a collection of photons or any particle. It is pure energy and has no mass associated with it. The conventional presentation of light as a collection of photons, each of zero mass, fails to account for the fact that the sun is losing thousands of tons of mass every second. With the Qur'anic denomination of pure light, as distinct from radiative or reflective light, there is no contradiction in the transition from mass to energy. This distinction between pure light (PL) and radiating or reflective light (RRL) is a necessary but not a sufficient condition for a truly scientific description of mass and energy. Sequential functionality must be described as the sufficient condition, and this requires proper characterization of mass and energy. This depiction is supported by the Greek philosophers as well as by Augustine of the Roman Christian Church.

Picture 5 Pure light contains within itself both tangible and intangible.

Picture 6 The closest analogy of pure light is the womb that surrounds the fetus and nourishes it. The Qur'an uses the similitude of a womb in describing how the Creator sustains the entire creation. The relationship here is entirely non-symmetrical and non-reciprocal. A fetus is in need of the womb and cannot exist without the protection and nourishment of the womb for the entire duration of the sojourn. In this case, the material existence of the fetus is the tangible and time is the intangible, whereas the spatial existence of the womb is the tangible and the temporal protection thereof is the intangible feature of the womb. This is reflected by the terms Ar-Rahman and Ar-Raheem, the first and second traits of the Creator mentioned in the Qur'an. The universe is the physical presence of the fetus, whereas the universal order is the life span of the fetus. Just as the fetus goes through stages, moving from a drop, then to a blood clot, then to a fully grown fetus, it becomes ready to enter a new phase of the life cycle.

1. Someone would ask: how is that different from David Suzuki's approach, or the universally acceptable 3R's approach? It is very different, because it advocates usability for an infinite amount of time, as opposed to waste minimization, which only buys time.

2. In the coarse language of The New Yorker's John Cassidy, Trump is saying "screw you to the world" in order to implement "his maniacal, zero-sum view."

3. The authors first went after the connections between peak oil theory and the unfulfilled predictions of the Rev. Thomas Malthus in (Zatzman & Islam, 2007). At that time, we pointed out: "In 1798 Thomas Malthus published his Essay on Population. This asserted that population must always and everywhere expand to outstrip the capacity of societies to feed themselves. This has been repeatedly disproved everywhere – in developing countries as well as developed countries. Nevertheless, in 1968 Paul R. Ehrlich published his work The Population Bomb, reiterating the same thesis with fancier computer projections. The first country that his model predicted would collapse calamitously was the People's Republic of China; the second was India. [Much of] the steady rise in the world oil price since 2004 [until the downturn in late 2014] is being 'blamed' on China and India raising their level of consumption to the level of the more developed countries of Europe and the Americas. There is, however, no longer any serious talk or threat of their population growth – which is still large in both absolute and relative (percentage) terms relative to any other part of the planet – overwhelming the ability of their economies to feed their population. These doom-laden predictions lack any foundation anywhere in engineering practice or scientific discourse. As for any notions about raw materials in general being in finite supply, technological breakthroughs have continually been finding new ways to make or do more per unit output of products or finished goods using less energy and/or less raw material per unit input. The reality of these technological revolutions has repeatedly refuted all previous claims in every other field that there are 'limits to growth' beyond which human existence or social progress cannot be sustained. In the last twenty years, the elaboration of cost-effective and profitable means for exploiting the extensive so-called 'unconventional reserves' of oil – like the oil sands of western Canada – has completely turned upside down the notion that the world's lights must go out when the last barrel of oil has been pumped in Saudi Arabia, Libya, Iraq or Iran. Where Malthus imprudently asserted that population must grow exponentially while food production could at best be increased only arithmetically, the work of Lord Boyd-Orr's team at the UN Food and Agricultural Organisation in the decade following the end of the Second World War, carrying on from his own classic pre-war investigations, as a professional nutritionist, of Scottish (Boyd-Orr, 1937) and English (Boyd-Orr, 1943) diet among the working classes, decisively refuted all notions that there was anything like a finite capacity for food production relative to any actual rate of population increase recorded anywhere on the planet. Hence, such repeated predicting, followed by the failure of reality to meet the prediction, suggests that the activity of such prediction itself lacks any rational basis. It is a prejudice feeding the formation of yet another 'devil theory of history': guess who will be blamed as things fail to go so well for countries that presently think they are "on top" in world rankings … 'Peak oil' can thus only be understood as the latest attempt to prepare yet another devil theory of history. People will be blamed for consuming too much: governments were ignorant, corporations became excessively greedy, people became desperate … and the world went to hell. Apparently delivering another set of Cassandra-like warnings of impending doom, the proponents of the theory of peak oil and its purported consequences are also messing with people's ability to sort anything out rationally or scientifically. That is, they are turning prejudice into disinformation. Petroleum being the basis of plastics and much else, this dissemination of disinformation is exercising the most paralyzing effect on developing and researching appropriate solutions to contemporary problems in all aspects of life." This leads to oscillatory behavior in the human population profile over a long-term scale. These assumptions will be discussed later in this section.

4. Whatever quality of life means. Stiglitz and others challenge the use of quality of life to refer to economic issues, but also want to include political (*cough* *liberal democracy* *cough*) and other 'happiness' indicators.

5. Ironically, as this is being written in mid-2018, "never is heard a discouraging word" about present and future US oil supplies. On the contrary, a number of questionable claims make their appearance daily, in both the general news and the more specialized energy industry media, about the United States having allegedly reached the cusp of becoming a major world exporter of oil and gas based on what is being generated on its own territory by the widespread application of hydraulic fracturing technology. This began in early 2014, while world oil prices still topped US$100 in Europe and the United States, and it persists to the present moment, when the world oil price hovers between US$58 and US$62 per barrel after collapsing below US$50 late in 2014. Unconventional oil and gas affected U.S. oil and gas production in an unprecedented manner. In 2012, U.S. oil production grew more than in any year in the history of the domestic industry, which began in 1859, and was set to surge even more in 2013. Daily crude output averaged 6.4 million barrels a day in 2012, up a record 779,000 barrels a day from 2011 and hitting a 15-year high (Fowler, 2013). It is the biggest annual jump in production since Edwin Drake drilled the first commercial oil well in Titusville, Pa., two years before the Civil War began.

6. The production cost of hydraulically-fractured oil and gas was estimated, using 2014 data from the Energy Information Administration of the United States Department of Energy, at around US$85 per barrel-equivalent. Clearly this is economically unsustainable in the conditions of the current deep slump in the world oil price. Many medium-sized banks in the U.S. are worried about going bankrupt if those of their customers who invested in allegedly investment-grade bonds offered by many hydraulic fracturing start-ups through those banks (back when the oil price was well above US$100 per barrel-equivalent, of course) decide to dump their dubious investments.

7. At the time King Hubbert was working on his Peak Oil theory, enhanced oil recovery (EOR) was not yet even a gleam in any petroleum engineer's eye, let alone a practical reality. The reality of EOR opens the possibility that a dormant well long believed to have been played out can be profitably revived at some time in the future. Some market analysts take this into account; some dismiss it as a marginal contribution to the overall profit picture.

8. This is discussed in depth in (Zatzman & Islam, 2007).

9. As the political movement for the reform of the living and working conditions of organized workers across Europe grew rapidly and entered the political mainstream, with many governments following the German and British examples of accommodating some of this movement's main demands piecemeal, a number of their comrades in the political struggles on the European continent back in the middle third of the 19th century, led by Karl Kautsky in Germany and supported from Tsarist Russia by Georgii Plekhanov, would eventually scrap and even oppose the very idea of a dialectical and historical materialist method as "dangerous" and "destabilizing." The Russian political activist and leader of a politically-conscious section of the Russian workers' movement, Vladimir Ilyich Ulyanov, also known as Lenin, however, stood in the way of this more or less deliberate defanging of the revolutionary implications of the dialectical and historical materialist method and rescued its revolutionary core in a series of remarkable political pamphlets, including What Is To Be Done? (1902), Imperialism, the Highest Stage of Capitalism (1916) and State and Revolution (1917).

10. A cairn in front of its Administration building actually describes the university's origins two centuries earlier from a fund created to launder the ill-gotten gains of an early 19th-century war crime committed by the Royal Navy against a customs house in the U.S. state of Maine several months after the Anglo-American hostilities of the War of 1812 had officially concluded.

11. This confusion continues today, when Quantum theory assumes that numerous universes are created from nothing (Hawking, 2010).

12. This time is a creation and is transient (changing).

Chapter 6
The Islamic Track of Economical Analysis

6.1 Introduction

In the post-9/11 'war on terrorism' era, Islam has rarely been talked about in a positive vein. The only exception has been the so-called Islamic banking system. In 2008, Islamic banks were the only ones that remained unaffected by the financial crisis that wiped out trillions of dollars from the global coffers. In Chapters 2 and 3, we saw some remarkable aspects of Islamic systems, similar to what we saw in education (Islam et al., 2013) and in technology development (Chhetri and Islam, 2008; Khan and Islam, 2012; 2016; Islam et al., 2010; Islam et al., 2015, etc.). These aspects are: the acceptance of gold as the only standard; good intention (commensurate with long-term welfare) being the only individual and collective starting point; the long term being the primary criterion; and zero waste being the only mode of technology usage. In this chapter, we look at the Islamic track of economic analysis. This is the track that was identified by Ibn Khaldun as the Caliphate model – the model that captures the essential balance of humans and the environment.

As the 'Islamic banking' system made its way through to Europe and North America, there has been renewed interest in learning about the rules outlined in Islamic jurisprudence (Zatzman and Islam, 2007). The turmoil in the US and Europe and the ongoing global economic and monetary crisis have added to this interest. Capitalism based on the interest-based fiat monetary system is indeed in a spiralling-down mode, and the world is on the lookout for possible solutions to the crisis. Returning to gold as the international monetary standard has been one suggestion from some quarters. In the case of Islamic economics, the call is to go back to the Islamic gold dinar, which was the monetary standard of the Shari'ah throughout Islamic history until the fall of the Ottoman Empire in 1924.
Islam is derived from the Arabic root "Salaam," which means peace, submission, and desertion (i.e., of one thing for another). In the religious sense, Islam means submission to the will of God and obedience to His law. In accordance with this definition, a Muslim is a person who surrenders his will to the will of God. The root of the Islamic financial system is the model that was instituted by Prophet Muhammad and the 'Rashidun' (rightly-guided) Caliphs, who ruled one of the largest 'empires' of history for approximately 40 years. Even though the model period ended after 40 years, Muslim rulers continued to use Qur'anic guidance in financial matters, irrespective of what political system they had. The 'dirham' (silver) and 'dinar' (gold) were the only currencies used for everyday transactions from the time of Prophet Muhammad until the end of the Ottoman Empire in the early 20th century. Even though the concept of a promissory note was in place in the post-Rashidun Caliphate period (starting with the Umayyad dynasty), it was nothing like the fiat currency in use today and was limited to specific large transactions, equivalent to today's letters of credit, which are used in large trading transactions.

According to Islamic law, the Islamic dinar is a coin of pure gold weighing 72 grains of average barley (Ibn Khaldun, 2012, p. 334). Modern determinations of its weight range from 4.44 grams to 4.5 grams of gold, with the silver dirham being created to the weight ratio of 7:10, yielding coins from 3.11 to 3.15 grams of pure silver. Reportedly, it was the second Caliph of the Rashidun Caliphate, Umar Ibn al-Khattab, who established the known standard relationship between them based on their weights: 7 dinars must be equivalent (in weight) to 10 dirhams (Ibn Khaldun, 2012, p. 334). There is another factor to be considered. Since the role of the dinar is simply as a measure of value that depends on its gold content, and since zakat (obligatory charity) is based upon one year's provision of foodstuff, the 3 ounces of pure gold and 21 ounces of pure silver that were used to determine zakat worthiness during the Prophet's era also constitute a standard. This puts the gold-silver ratio at 7 (Hasan-uz-Zaman, 1991, p. 344). Undoubtedly, interest in the gold dinar among the public, academics, the business community and even governments has increased lately. The gold dinar is generally agreed upon as a 4.25 gm gold coin, based upon the Roman solidus that circulated during the time of the Prophet. The gold dinar forms the monetary standard for the Shari'ah rulings on muamalat, zakat, hudud and mahr. Nonetheless, there is difference of opinion among the proponents of the gold dinar on the purity and weight of the coin: should it be made of 22K gold or 24K fine gold? Should it be 4.25 gm or more than that? Twenty-four karat (24K) gold is fine gold; by today's standard it is 99.99 percent pure. The 22K gold accordingly contains 91.66% gold, and hence is known as 916 gold.
Due to rounding, some gold dealers make it 917, which means that of one thousand parts, 917 parts are gold while the rest is some other metal, normally silver or copper. The question of whether the gold dinar is of fine gold or not is important because it is the Shari'ah standard, and even zakat, the fifth pillar of Islam, is based on it. One question that arises is what purity of gold should be considered the standard. This question should not be difficult to answer, because the gold dinar had been a historical standard among Muslims for centuries. It is not a modern innovation or theoretical construction. Hence, to answer the above question, one simply has to go back to history, particularly to the time of the Prophet Muhammad. The Prophet Muhammad is reported to have said: "Volume is to be measured according to the system of the people of Al-Madinah, and weight is to be measured according to the system of the people of Makkah." (Nasa'i, 2007, Vol. 5, Book 44, Hadith 4598). It turns out that the Muslims had not yet minted gold dinars at that time. In fact, the first Islamic gold dinars were not minted until about half a century after the demise of the Prophet, by the fifth Umayyad caliph Abd al-Malik ibn Marwan in the year 697 CE. The Prophet accepted the Roman Byzantine gold solidus, also known as the bezant, as the monetary standard for Muslims. In this is wisdom. It was this coin that circulated among the Arabs for decades before the Muslims minted their own coins. Since the gold coin of the Eastern Roman Byzantine Empire, the solidus, was the coin accepted by the Prophet and the coin circulating among the Muslims, the Islamic gold dinars minted by later Muslim rulers would follow this standard.
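The karat arithmetic behind the "916 versus 917" figures is easy to verify. A minimal sketch follows, assuming only that karat purity is the gold mass fraction k/24; the function names are mine, not standard assay terminology:

```python
# Karat purity as a mass fraction of gold: k karats out of a maximum of 24.
def karat_to_fineness(karat: float) -> float:
    return karat / 24.0

# Millesimal fineness: parts of gold per thousand.
# Truncating 916.66... gives the familiar "916"; rounding gives the dealers' "917".
def millesimal(karat: float, rounded: bool = False) -> int:
    parts = karat_to_fineness(karat) * 1000.0
    return round(parts) if rounded else int(parts)

print(millesimal(22))                # 916
print(millesimal(22, rounded=True))  # 917
```

The two results differ only in whether the fractional two-thirds of a part is dropped or rounded up, which is exactly the discrepancy among gold dealers noted above.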

6.2 Function of Gold Dinars and a New Paradigm for Economic Analyses

Gold dinars played the role of money in the Islamic era (Picture 6.1). Hence, the dinar eliminated the problems generally associated with barter trade, like the double coincidence of wants and the problem of divisibility, discussed thoroughly in Chapter 3. As money, it also enabled people to specialize in whatever they did best and hence increased productivity, output and trade, thereby raising the standard of living of the people. Among the most important functions of the gold dinar as money was its role as a stable measure of value. This feature of gold was highlighted by Aristotle, as we have seen in previous sections. By this means, people are able to exchange goods and services in a just manner, save for future consumption and investment, transact in credit, and repay debt in the future.

Picture 6.1 Dinar and dirham represent gold and silver coins, respectively.

In formalizing Islamic economics and finance, Ibn Khaldun, the Father of Social Sciences, played an important role (Islam, 2018). In his famous book, Al-Muqaddima, Khaldun writes on the dynamics of civilization, or umran, which is considered the foundation of sociology. Economics being the driver of social structure, Khaldun's work is also a treatise on Islamic economics (Oweiss, 1988). In economics, Ibn Khaldun's work covers almost every foundation of modern economic thought, ranging from microeconomics to international trade. It is no surprise, then, that almost one third of his Muqaddima consists of socio-economic concepts. He wrote these concepts in such an interconnected way that one cannot understand one concept without knowing the basic ideas of the others. This is similar to Aristotle's approach to economics, except that Ibn Khaldun had a solid reference for his work, viz., the Qur'an and Hadith. As such, the scientific cognition axis was fool-proof (Islam, 2018). This is typical of the Islamic scholars of the time (Islam et al., 2013). Ibn Khaldun wrote, in the context of the gold standard,

"God created the two mineral 'stones,' gold and silver, as the (measure of) value for all capital accumulations. (Gold and silver are what) the inhabitants of the world, by preference, consider treasure and property (to consist of). Even if, under certain circumstances, other things are acquired, it is only for the purpose of ultimately obtaining (gold and silver). All other things are subject to market fluctuations, from which (gold and silver) are exempt. They are the basis of profit, property, and treasure" (Ibn Khaldun, 1958, p. 480).

Coins from the time of Abd al-Malik ibn Marwan, unearthed by archeologists, put the weight of the dinar at about 4.25 grams, matching the weight of the worn solidi that circulated in those areas. One could attribute the slightly lower purity of the first Islamic gold dinar, compared to the Roman coin, to the fact that this was the first attempt of the Muslims to mint their own coins, and hence to their relative inexperience in refining and minting technology compared to the Romans, who had been doing this for centuries. Undoubtedly, however, the intention was to get a coin of gold as pure as possible. As the Islamic empire expanded and trade flourished, it must have become apparent that the gold dinar was lower in weight than the Roman solidus. The Caliph Umar ibn Abd al-Aziz noted that the dirhams of Abd al-Malik ibn Marwan were at 7:10.5 to the mithqal instead of the standard 7:10. Hence he corrected the matter and issued, in 99H/717CE, silver dirhams and gold dinars of weight 3.15 gm and 4.5 gm respectively, i.e., the same weight as the Roman solidus, 4.5 grams. Historical evidence shows that by the time of the Fatimid Dynasty in Egypt, dinars of fine gold were already in circulation.

6.2.1 The Standard Weight of the Islamic Gold Dinar

Determining the standard weight sounds easy but is rather challenging. It is easy if we have a previously established standard, but difficult if we have to invent a new one. One cannot rely entirely on coins unearthed by archeologists in this regard, because unearthed coins generally will have experienced some wear and tear, depending on how long they had been in circulation, and will also show some variance in the weight of the individual coins themselves. Some could have been tampered with, through clipping and so forth. Hence, it is best to resort to the definition of the coins as determined by the issuing authorities, in this case the Byzantine Empire. It is obvious that the Islamic gold dinar is based on Constantine's Roman solidus, which was struck 72 to the Roman Byzantine pound (litra) used for gold measurement. The litra pound is recorded as 324 gm, which gives an ounce of 27 gm. Hence the weight of the solidus is 4.5 gm as recorded, which equals one mithqal, or 24 Greco-Roman carats. This coin was frequently melted down and reminted to preserve the weight. However, as mentioned earlier, the coin circulated among the Arabs with an average weight of about 4.25 gm due to wear and tear. Therefore, the actual mithqal or dinar should weigh 4.5 gm of pure gold. Indeed, this was corrected by Caliph Umar ibn Abd al-Aziz during his reign, by changing the weight from 4.25 gm to 4.5 gm. It was reported that the Prophet Muhammad said, "The weight of the dinar is 24 qirats". Ibn Khaldun likewise asserted the following in al-Muqaddimah:

Know that there is consensus since the beginning of Islam and the age of the Companions and the Followers that the dirham of the shari'ah is that of which ten weigh seven mithqals weight of the dinar of gold … The weight of a mithqal of gold is seventy-two grains of barley, so that the dirham, which is seven-tenths of it, is fifty and two-fifths grains. All these measurements are firmly established by consensus.

From the above hadith and historical facts, it can be established that the Islamic dinar is of pure gold equal to one mithqal, or 24 qirats, or 72 grains of barley, which equals 4.5 gm in modern weight. Accordingly, a barley grain weighs 0.0625 gm (4.5 gm ÷ 72), i.e., 62.5 mg. Also well known is the fact that 7 mithqals equal in weight 10 dirhams. Therefore, this also implies that the silver dirham is of pure silver, weighing 3.15 gm (0.7 × 4.5 gm), which equals 50 2/5 grains (3.15 ÷ 0.0625, or 0.7 × 72), as mentioned by Ibn Khaldun.
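The whole weight chain follows from the single recorded figure for the litra. A minimal sketch of the arithmetic quoted above (the constant names are mine; the values are those of the text):

```python
# Weight chain for the dinar/dirham standard, using the figures quoted above.
LITRA_G = 324.0                    # Roman Byzantine pound (litra) in grams
MITHQAL_G = LITRA_G / 72.0         # solidus struck 72 to the litra -> 4.5 g
GRAIN_G = MITHQAL_G / 72.0         # 72 barley grains per mithqal -> 0.0625 g
DIRHAM_G = MITHQAL_G * 7.0 / 10.0  # 7 mithqals weigh the same as 10 dirhams

print(MITHQAL_G)           # 4.5  (grams per mithqal/dinar)
print(GRAIN_G * 1000.0)    # 62.5 (milligrams per barley grain)
print(DIRHAM_G)            # 3.15 (grams per dirham)
print(DIRHAM_G / GRAIN_G)  # 50.4, i.e. fifty and two-fifths grains
```

Every figure in the paragraph above (4.5 gm, 62.5 mg, 3.15 gm, 50 2/5 grains) falls out of the two ratios 72:1 and 7:10.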

6.2.2 Inscriptions on the Islamic Gold Dinar

Generally, the Islamic gold dinar does not depict pictures of caliphs, rulers, animals or other living things, in accordance with the Shari'ah, which discourages such practice. The first Islamic gold dinar, i.e., that of Abd al-Malik ibn Marwan, had an inscription based on Qur'anic verses. One may notice that the earliest coins never carried full Qur'anic verses. Perhaps this is because the early learned scholars opined that it is highly possible for people to bring coins into impure places like toilets and so forth, and also possible to lose them to the ground. Also, because coins pass from hand to hand in circulation, one cannot afford to make a mistake in Qur'anic verses inscribed on the coins: once circulated, it would be extremely difficult to call them back in case of mistakes. For example, the dinar and dirham of Abd al-Malik ibn Marwan had the following inscriptions. The obverse of the coin has as its central legend the Shahada, i.e., "There is no deity except Allah alone, there is no partner with Him". Around it is the mint date formula reading "In the Name of Allah. This dirham was struck in the year 79 AH". The reverse of the coin has the central inscription based on Surah 112 of the Qur'an: "Qul huwa Allahu Ahad, Allahu-Samad, Lam Yalid wa lam Yulad wa lam Yakul-lahu Kufu-an Ahad". The marginal legend is based on Qur'an 9:33. It states: "Muhammad is the Messenger of Allah; he was sent with guidance and the religion of truth to make it prevail over every other religion." Note that these are not full Qur'anic verses.

6.3 Labor Theory of Value

According to Ibn Khaldun, labor is a source of value. He explains his theory of labor value in detail, presenting it for the first time in history. According to him,

"… everything in the world is purchased with labor. What is purchased with money or other good is purchased by labor, inasmuch as gained by labor from our body. Money or commodities indeed save us. They contain certain quantity value of labor that we exchange for what it should be, when it contains the same quantity. The value of a commodity for those who own it, and those who do not use it for himself, but exchange it with other commodity, therefore, equal to labor quantity that enable him purchasing or, directing it. Labor, therefore is a real measure of exchangeable value of all commodities" (Oweiss, 1988, p. 114).

Ibn Khaldun divided all earnings into two categories, ribh (gross earning) and kasb (earning a living). Ribh is earned when a man works for himself and sells his objects to others; here the value must include the cost of raw materials and natural resources. Kasb is earned when a man works for himself. Most translators of Ibn Khaldun have made a common mistake in their understanding of ribh. Ribh may mean either a profit or a gross earning, depending upon the context. In this instance, ribh means gross earning, because the cost of raw materials and natural resources is included in the sale price of an object. This is an important distinction, because it also dictates the types of pricing that are legitimate or sustainable. In economic terms, revenue, whether as ribh or kasb, is the value realized from man's labor, i.e., all that is obtained through human effort. This line recognizes the unique nature of value addition and the role of humanity in the science of economics. In Arabic, the word a'mal, which stands for 'labour', has intention attached to it. The first Hadith of Prophet Muhammad, as reported in the Book of Bokhari, links intention to long-term outcome. It is in the long term that the real value of any outcome is added (Islam et al., 2017).
According to Ibn Khaldun, although commodity value comprises the cost of raw materials and natural resources, it is through labor that value increases and, hence, wealth grows. Wealth, therefore, is fully connected to human conscience, and as such to the Creator, who placed humans on earth with a clear purpose (Islam et al., 2017). Without one's effort, the opposite will occur, meaning the value will decline. The word 'effort', once again, is integral to human conscience, in which intention features prominently. Ibn Khaldun underlines the role of extra effort, later known as marginal productivity, in the welfare of a society. His theory of labor gives the reason for the growth of cities, which, as his historical analysis indicates, becomes a major element of civilization. "Labor is necessary for revenues and capital accumulation. This is obvious in the case of manufacture (craft). Even if revenue is generated from something other than manufacture, the value of the generated profit (and capital) should cover the labor value by which the commodity is obtained. Without labor, all other things will not be acquired." Ibn Khaldun divides all revenues into two categories: ribh (gross revenue) and kasb (life revenue). Ribh is secured when a man works for himself and sells his products to others; in this context the value should contain the cost of raw materials and natural resources. Kasb is achieved when one works for himself. Therefore, ribh means profit or gross revenue, depending on the context. In this instance, ribh means gross revenue, because raw material costs and natural resources are included in the selling price of an object. Profit theory in modern economics is expressed as:

π = TR − TC [Eq. 6.1]

where π is the total profit; TR is the total revenue, i.e., Q × P, the total quantity sold multiplied by the unit price; and TC is the total cost, given by the cost function TC = a + bQ, in which a is the fixed cost and b is the variable cost per unit. If π > 0, there will be a positive return, meaning there will be a profit; if π < 0, there will be a negative return, meaning there will be a loss. In Ibn Khaldun's nomenclature, kasb would be the total cost, distinct from ribh (the positive return, or profit). The term for the 'extra effort' that constitutes one's profit is sa'i, 'man's effort', which refers to one's effort based on striving. Once again, in seeking a phenomenal (i.e., Islamic) economic model, one's intention in seeking profit (sa'i) is fundamentally important. Striving to accumulate wealth as a primary motive is forbidden in Islamic law, so the term 'extra effort' must be qualified: only the effort carried out with the intention of the welfare of society constitutes real productivity. When this term was later adopted as 'marginal productivity', the source, i.e., intention, was removed from it. The word profit (kasb) relates to the context of wealth generation through the following quote from Ibn Khaldun (1958, p. 480):

This phrase is translated as: "then, you should know that profit [kasb] occurs with effort [sa'i] to possess things, and the dynamic intention to collect [material wealth]." Note that marginal productivity is being described as something gained in addition, the relevant phrase being translated as: 'qsd (dynamic intention) to collect [possessions]'. This 'qsd' is also the root word of iqtisad – the word translated in English as economics. Capital accumulation is translated from muta-mawwal – a word that can denote accumulation driven either by greed or by good intention. This accumulation is subject to market fluctuations, from which gold and silver are exempt; they are the basis of profit, property, and treasure. In Chapter 5 of the Muqaddimah, Ibn Khaldun recognizes that this process of accumulating wealth is part of a broader scheme, that is, to test human beings for a higher purpose. As such, Qur'anic verses are invoked to establish that all activities are part of the Universal order, and the only control humans have is over their intentions; accordingly, they will be rewarded or punished for their intention. In particular, the following verse can be referred to (Qur'an 21:79):

The rough translation being: ‘And We gave understanding of the case to Solomon, and to each [of them] We gave judgement and knowledge. And We subjected the mountains to exalt [Us], along with David and [also] the birds. And We were the doer of everything.’

In addition, Ibn Khaldun acknowledged that inspiration comes from God and that without that inspiration humans cannot become productive in the true sense. If we interpret Ibn Khaldun’s idea of work, it is clear that labor is a necessary and sufficient condition for revenue, whereas natural resources are only a necessary condition. Missed in this interpretation is the intention that goes with the word for labour in Arabic, am’al, which is closer to karma than to labour in modern European economics. Unlike the word sa’i, or effort, which is added to am’al, am’al has intention embedded in it. Five different words used in Islamic sources are relevant to the concept of work: am’al, sa’y, fi’l, and kasb (all roughly meaning work, at least outwardly), and a fifth word, ajr, which is the compensation for work. The word am’al has four different meanings: (1) work or labor, (2) act or deed, (3) production or manufacturing, and (4) province or some part of a country (Baalabakki, 1995). This word in the Quran and Hadith refers to work in a broad and general sense that is unique because of the intention attached to it. For such work to be real, the intention must conform with the intention of achieving long-term success (which itself is in line with the purpose of human life). More than 500 verses in the Quran refer to am’al in this broader meaning. However, the true meaning of the word has been adulterated ever since the dismantling of the Rashidun Caliphate, and the word came to denote not the act of manufacturing or labor but only theological meditation and worship (Shatzmiller, 1994). This is a distorted version of Islam and has been called ‘cultural Islam’ by noted scholars (Islam, 2018). True Islam doesn’t distinguish between acts of worship and any other act, because any action must be undertaken with the intent of conforming with the long-term objective (Islam et al., 2017).
As such, the Islamic orientation to economics fundamentally changes the existing capitalist understanding of economics and the purpose of wealth. By bringing in a discussion of the metaphysical (i.e., philosophical truths about human existence and purpose), the purposes of labour and trade acquire fundamentally different meanings. In Chapter 3, we hinted at the difference between true intention (i.e., carrying out the purpose of life, niyah in Arabic) and short-term intention, which conflates true demand based on needs with false demand based on desire, regardless of the market – the consumer market, supply market, or labour market. It is important to note that Islam does not assign value based on the nature of work, whether intellectual (intangible) or manual (tangible). In conventional economics, value addition is mainly a design consideration, hence intellectual work is deemed worthy of higher value. In Islamic economics, the actual action is part of the universal order, and as such the intangible, namely intention, is of primary interest. The fundamental merit of any work is thus subject to the intention, and this applies universally to all parties, whether employers, employees, the self-employed, or even the unemployed. The Qur’an refers to the manual labor of Prophet Noah in the construction of the boat (11: 37–38), to Prophet David in the production and manufacture of suits of armor (34: 10–11), to the tending of sheep by Prophet Moses (28: 26–27), and to the construction of a wall by the companion of Prophet Moses (18: 77) and by Zulqarnain (18: 86). The Qur’an even speaks of smelting as engineering knowledge given to Prophet David (34: 10). It also mentions the intellectual labour of Prophet Joseph, who was appointed treasurer and custodian (or finance minister) by the King of Egypt (12: 55).

Efforts alone cannot produce a positive outcome; rather, the coupling of good intention and effort does. Only under those circumstances does it become clear that good intention is the necessary condition, and effort the sufficient one, for producing a positive outcome. In a later section, we discuss these in the context of static intention (niyah) and dynamic intention (qsd). Ibn Khaldun explains the causes of differences in labor revenue. They might be caused by differences in skill, market size, location, expertise (craftsmanship) of work, and the extent to which the authorities and governors purchase final products. When a certain kind of labor becomes more expensive, namely when demand exceeds available supply, the revenue must increase. This is a natural process as long as supply and demand both remain natural, meaning in line with the overall goal of homogenizing wealth rather than hoarding it. It is possible that a high return on a manufacturing investment opportunity will attract other players, irrespective of their intention. This is a dynamic phenomenon that finally increases available supply and lowers profit. This principle shows how original Ibn Khaldun’s idea was: it describes the long-run adjustment of labor, both within a given profession and between professions. The focus, however, was on labour or skill, as well as goods, and did not involve the transformation of information or perception into wealth. Ibn Khaldun precisely observes how income may differ from one place to another, even for a similar profession. Income for judges, craftsmen, and even burglars, for instance, is directly related to the welfare levels and living standards in every city, achieved through labor results and the crystallization of a productive society. Four centuries before Adam Smith floated the idea, Ibn Khaldun compared income in Fez (in today’s Morocco) and Tlemcen (in today’s Algeria).
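The adjustment described here, in which positive profit attracts new entrants until the extra supply erodes the profit, can be sketched as a toy simulation. The linear demand curve and every parameter value below are assumptions chosen for illustration, not figures from the text.

```python
def simulate_entry(demand_intercept=100.0, demand_slope=1.0,
                   unit_cost=20.0, firms=2, firm_output=10.0, periods=30):
    """Toy model of the long-run adjustment: positive profit attracts new
    entrants, raising supply and lowering price until profit is exhausted.
    Linear inverse demand: P = intercept - slope * Q."""
    history = []
    for _ in range(periods):
        quantity = firms * firm_output
        price = max(demand_intercept - demand_slope * quantity, 0.0)
        profit_per_firm = (price - unit_cost) * firm_output
        history.append((firms, round(price, 2), round(profit_per_firm, 2)))
        if profit_per_firm > 0:
            firms += 1   # positive profit draws one more entrant
        else:
            break        # no profit left, entry stops
    return history

hist = simulate_entry()
# Entrants accumulate (2 -> 8 firms) while price falls (80 -> 20)
# and per-firm profit shrinks to zero.
```

The sketch captures only the mechanical part of the adjustment; as the text stresses, it says nothing about the intention behind entry, which is where the Islamic analysis departs from the capitalist one.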
It is Ibn Khaldun, not Adam Smith, who first presented labour contribution as wealth creation for a nation, by stating that labor increases productivity and that product exchange in a large market is the prime reason for the wealth of a nation. Conversely, a decrease in productivity may lead to a decline in the economy and the income of its society. In his words, “a civilization generates large profit (income) due to large number of labor force that is the cause of profit” (Ibn Khaldun, 1958, Part 4, Chapter 14, p. 339). The concept of productivity is directly related to time-optimization. Optimization of time is done through the optimization of one’s dynamic intention (in Arabic, qsd). One of the unique features of the Arabic language is the distinction between dynamic intention (qsd) and static intention (niyah). This aspect will be discussed in a later section. At this point, it suffices to state that productivity is strictly a function of real labour and real production, and not at all a function of perception. As shown in Figure 6.1, the supply and demand scenario spills over into the bigger, global economy. In the information age, supply and demand have become transparent on the global scale, and it has therefore become easier to access a wider market, making it vulnerable to control by mega-corporations. Unless restricted by the notion of ‘economy for the sake of society’, in line with the general, natural motive of economics, the system becomes a field for economic exploitation, giving rise to the obfuscation of the natural trends of a free economy. Ibn Khaldun understood the need for a free economy and the role of free choice in an economy. He

wrote in the Muqaddima: Among oppressive acts, and a very perilous measure against the people, is to compel someone unjustly to do forced work. For labor is a commodity, as we will show later, in that income and profit represent the work value of their recipient; unfortunately, most people have no source of income other than their own labor. Therefore, if they are forced to work outside what they attained through training, or compelled to do work in their own field, they will lose the result of their work and be deprived of the greatest part of, or even all of, their income (Ibn Khaldun, 1958, p. 264).

Figure 6.1 A large market consists of high demand (D1) compared to a small market (D0), even at a different price level. This also induces large investment, in turn causing high supply (S1). Through the cost and return function, a large market generates large income as well (from Koutsoyiannis, 1979).

To maximize their revenue and utility level, one should be free to do what one’s talent and ability lead to. Through natural talent and learnt ability, one can freely produce high-quality objects, and often more work-units per hour. One important aspect, however, is often overlooked: Ibn Khaldun did not invent this correlation between individual rights and freedom. In fact, labour law is an integral part of Shari’ah. As such, Ibn Khaldun was merely articulating Shariah law in ‘modern terms’, some of which appealed to European philosophers and made it through to modern literature, often being perceived as something that supports Capitalism. It is important to review some of the labour laws of Islam. In examining the contrast between capitalism and Islamic economics, however, it is necessary to understand how a significant difference in teleology (i.e., the overall purpose of life) results in such a different economic culture, despite the fact that both the Islamic economic system and capitalism share an understanding of a free market for prices, labour, demand and supply. Let us take the example of a labour relationship to show this contrast. For the purpose of showing a contrast between capitalism and Islam, we can take an example in Islamic economics that is the most likely to resemble capitalism: say, the relationship that exists between a slave and his master in Islamic law. So, let us review that extreme scenario and build from there the rights of labourers in Islam.

1. Islam positions the slave as a brother to his master. The Prophet Muhammad said,

“Your brothers are your slaves. Allah positions them under your power” (Bukhari, 2007, hadith 30). The Prophet mentioned the slave as a brother of his master to make their standing equal, as brothers. 2. Prophet Muhammad then forbade the masters to give tasks to their slaves that were more than they could bear. If such tasks were given, he ordered the masters to help them. The Prophet said:

“Do not burden them [the slaves], and if you give them some tasks, help them.” (Bukhari, 2007, hadith 30). 3. For labourers, the Prophet obliged the masters to give the wage of their workers in time, without reducing it even for a bit. He said:

“Give the worker his wages before his sweat dries” (Narrated by Ibn Majah, 2007, Vol. 3, Book 16, Hadith 2443). 4. Islam gives a stern warning to employers who oppress their servants or employees. The prophet said, that Allah decreed,

“I am the opponent of three on the Day of Resurrection, and if I am someone’s opponent I will defeat him: A man who makes promises in My Name, then proves treacherous; a man who sells a free man and consumes his price; and a man who hires a worker, makes use of him, then does not give him his wages” (Bukhari, 2007, hadith 2227; and Ibn Majah, 2007, hadith no. 2442). 5. Islam motivates the employers/masters to ease the burden of their employees and slaves. Prophet Muhammad said:

“The ease that you give to your slave will be a reward on your scale of deeds” (Al-Asqalani, 1996, p. 263; narrated by Ibn Hibban in his compilation of valid hadith, and Shuaib al-Arnauth stated that its narration is valid). 6. Islam motivates the masters and employers to be modest yet authoritative toward their labourers and servants. The Prophet said,

“Not an arrogant one, a master who is willing to eat with his slave, willing to ride on the donkey at the market, and willing to tie the goat and milk it” (Bukhari, 2011, hadith 550). 7. Islam stresses to reduce the violence toward the subordinates to the maximum. Aisha narrated,

“The Messenger of Allah – peace and prayer of Allah be upon him – never hit with his hands at all, not upon women, nor slaves” (Muslim, 2007, hadith 2328; Abu Dawud, 2008, hadith 4786). The Prophet once came upon one of the companions, Abu Mas’ud al-Ansari, who was hitting his slave. At once, the Prophet reminded him from behind:

“Know O Abu Mas’ud, Allah is More Powerful to punish you that way, than your ability to punish him.” When Abu Mas’ud turned, he was surprised to see the Messenger of Allah and he freed his slave spontaneously. The Prophet praised him:

“If you had not done that, the Fire would have burned you” (Muslim, 2007, 1659; Abu Dawud, 2008, 5159, and others). Different traditions from the Prophet deal specifically with these subjects, and many provide explicit orders, many of which are enforceable by Islamic law. The Prophet said: “Your slaves are your brothers upon whom Allah has given you authority. So, if one has one’s brother under one’s control, one should feed them with the like of what one eats and clothe them with the like of what one wears. You should not overburden them with what they cannot bear, and if you do so, help them” (Bukhari, 2007, hadith 2545). Based on such orders, which are observed as legal commands amongst Muslim legal scholars,

we find precedents in early Islamic history. One example during the Rashidun Caliphate was the labour policy of the Caliph Umar, who fixed the wages for personnel engaged in wars (there was no traditional military), and during his period salaries were also revised based on several criteria, such as length of service, performance, and knowledge level. Umar also set a very high salary for judges, intending to protect them from the temptation of bribery (Chavan and Kidwai, 2006). These episodes show that Islam has referred both to the minimum and to the ideal/just wage, but it specifies the minimum wage so that the basic needs of a worker are met. Corresponding to the Universal Declaration of Human Rights’ call for “just and favorable remuneration,” Islam is of the view that workers’ wages should be set in a way that satisfies all their and their families’ needs in a humane manner. In another hadith, it is mentioned that “whoever takes a public job and has no house (of his own), should have one” (i.e., the government should provide housing). If he is not married, he should get married, and if he does not have something to ride, he should have one (provisioned by the government). By combining the provisions in all the above hadith, it appears that Islam requires employers to provide workers with housing, medical facilities, job education or training, transportation, and meals. The Quran also provides general guidelines on rest and leisure and considers it a basic right (28: 73, 33: 53). We find reference in Hadith that the Prophet said, “Man owes something (of his labor and energies) to himself, something to his body, something to his wife (family), something to his eye (psychic or aesthetic satisfaction)” (Ahmad, 2011, p. 598). By focusing on necessities and strictly need-based living conditions, Islam cares more about real wages, which need to be maintained or increased, than about nominal or monetary wages.
Furthermore, as pointed out by Zulfiqar (2007), in Islam there is a basis for setting a minimum, as we find this rule in practice in Zakat (the third pillar of Islam, which requires compulsory charity from the rich), which is applicable only if a person holds the minimum quantity of gold, silver or currency, called nisab, for one full year. In conclusion, setting a minimum wage at a “fair,” “just,” or “living wage” level is quite in line with Islamic principles. When it comes to minimum-wage labour laws in Islam, workers’ rights and employers’ rights are both covered following the principle outlined above. Ordinarily, the government does not have any role to play in this labour relationship. However, if market imperfections are causing erosion of workers’ wages and the market equilibrium is set at a very low level, the state can require employers to provide Ujra mithl (the intrinsic wage, or the wage accepted by others for similar work) to the workers, a form of what is now often called a “prevailing wage.” Ibn Taimiyyah (d. 1328; 1983) argued that there are certain sectors, such as farming, building, weaving, and other public utilities, that are not supported by market forces due to non-natural market distortions, and in which individuals do not invest. It is of interest to look into the current financial infrastructure and determine the nature of the distortions that have become ubiquitous. However, if the public needs these industries, they become a collective obligation (fard kifaya) in which some individuals have to do this work. Such a scenario can also prevail during wartime or a time of national emergency. The obligation to uphold peace and order rests upon each citizen, and the government is merely an organizer.

We see a precedent for the implementation of a minimum wage during the reign of Uthman ibn Affan, the third caliph (Islam, 2016), who reported a saying of the Prophet Muhammad: “There is no right for the son of Adam except in these things: a house in which he lives, a garment to cover his nakedness, a piece of bread and water” (al-Tirmidhi, hadith 2341). What we derive from this is that the basics of life, such as food to quell hunger, garments to cover nakedness, and housing to protect from environmental harshness, are fundamental human rights. They are not to be commodified, commercialized, or turned into a source of creating wealth. As far back as Aristotle, the fundamental impetus of economics was defined as the welfare of society. What does welfare mean if the basic rights of the members of society are not honoured? In summary, Islamic principles are strictly phenomenal in basis, that is, they recognize that every member of the community has equal responsibility to each other and accountability to the Creator. As a result, a long-term approach (long-term as in benefit in the hereafter) is a necessary component of Islamic economics. The sufficient condition is the continuous optimization of dynamic intention at every juncture of financial decision-making, which must be based on need as opposed to greed and selfishness. As such, Islamic principles are diametrically opposed to both Capitalism and Communism. Philosophically, we have characterized this as the optimization of civil individual liberties and social welfare (Chapter 3).

6.3.1 Competing Interests of Employer and Employee

As such, a divergence point between Islamic economics and Capitalism concerns the conflicting interests that arise between employer and employee. Islam (2017) addressed this issue. What one would expect from a government truly adhering to Islam is that all the factions of interest: corporations (economics), trade unions (common people, oppression), rights groups, and so on, would certainly be of concern to the ruling Caliph and the advisory board, but that the board would be concerned only with what the Qur’an and Sunnah say on the issues, through objective empirical analysis. Though in some cases Ijtihad (active research) and other innovations will have to be used, as is required with the expansion of government (as Caliph ‘Umar did), the ultimate ruling on policy will be the best possible interpretation of God’s will: if trade unions are complaining, the question will be “what is a decent minimum wage” according to God, which ‘Umar and ‘Uthman in fact ruled on 1400 years ago (for Umar, see The Voice of Islam, 1974, p. 300; for Uthman, see Sri Lanka Foundation, 1988); if corporations want to lower it, the question will be “is it oppression against God (through the people) to do so?”; if there are reports of potential invasion by an enemy, the question will be “what does Islamic foreign policy say about this?” Even though secularist scholars like Abdullahi Ahmed An-Na’im try to give the impression that Islam is so simplistic, and the Qur’an so incomplete, that it could not possibly be used as a manual for governance (An-Na’im, 2010), we have seen throughout history that it has been, and that it can be applied even in the context of the modern world, as outlined by Islam (2017). Islam, being political as well as ideologically reformative, entails a profound change in the outlook of the individual once the political enters the mainstream; something far more ideologically revolutionary than the Iranian revolution (Kepel, 2006).

Just as the notion that humans are inherently selfish is shunned in Islam in favour of the view that humans are inherently good, conscientious and moral, the notion of a competing employer-employee relationship is entirely absent from Islamic economics. All of the injunctions in the Quran and Hadith relating to harmonious employer–employee relationships stem from the same accountability and primary motivation that is in line with the purpose of life. No such labour law exists today. Ahmad (2011) pointed out that Islamic labour law can, for lack of a better analogy, be “analogized to International Labour Organization (ILO) labor standards which are often criticized as lacking ‘teeth’—a simplistic view that underestimates the moral force of authoritative, universally acknowledged norms.” In a way, Islamic labour law is practical only when this ‘universal authoritative norm’ is present and enforced. Islam (2018) makes a case for both the need for and the sufficiency of such norms and the governing system in Islamic governance. In the absence of such an umbrella organization, i.e., the state authority, the so-called ‘Islamic financial system’ is incomplete. An Islamic state offers oversight and ensures comprehensive justice in labour relations, and as such plays an indirect role in regulating the labor market, but does not interfere with the natural supply-demand dynamic. Several Muslim legal scholars (e.g., Al-Mawardi, 1996; Ibn Taymiyyah, forthcoming) acknowledge government intervention in the form of a market supervisor, who must ensure that workers are not exploited by employers through overburdening them with work or paying less than due wages. At the same time, he also has to protect the employer from workers’ demands for higher than usual pay. If either of the two parties lodges a complaint with the inspector, it is his duty to provide justice to both of them.
The role of such a ‘market supervisor’ was not formalized during the Rashidun Caliphate, and the Prophet’s companions were clearly leery of introducing more government intervention than what Prophet Muhammad had introduced, but the spirit of this line of thought is rooted in the obligation of every individual to do his or her best to ensure justice whenever an act of injustice is observed. Numerous verses of the Qur’an compel citizens to ‘enjoin good and forbid evil’ (e.g., 3:110, 7:157, 16:91). In an Islamic setting, if there is a violation of labour law, it must first be addressed in the form of ‘goodly advice’, and if that does not succeed, it has to be brought before the justice system, which does not discriminate between ‘criminal offence’ and ‘civil offence’. In Islam, opposed interests do not arise between employer and employee, or between investors and investment firms or banks. This is because the ultimate security and accountability both reside with the Creator, and every entity is bound by the same moral code that is in line with the purpose of life. One “problem” that may occur, and did occur in Prophet Muhammad’s time, is voluntary or involuntary unemployment. While it is forbidden to seek welfare other than out of hunger, it was routine to have food and shelter available for the needy, and no one asked why they did not work or find a job. For this purpose, the state plays a prominent role. By having the state provide for the needy, the needy are shielded from the stigma of seeking help privately; perhaps more importantly, this elevates a collective economic problem to the level of the state, so that a comprehensive solution can be offered if need be. In times of drought and famine, the government of the Rashidun Caliphate took a number of special measures to counter the crisis, including temporarily halting the corporal punishment for stealing committed by destitute citizens. The second Rashidun Caliph, Umar, said, “God has deputized us on His slaves

(humans) to protect them from hunger, to clothe them and to facilitate finding occupations for them” (cited by El-Ashker and Wilson, 2006). He was also quoted on the topic of widespread unemployment and civil unrest in the following words: “God has created hands to work, if they can’t find work in obedience, they will find plenty in disobedience, so keep them busy in compliance before they get you busy in defiance” (cited by El-Ashker and Wilson, 2006). During the Prophet’s time, hundreds of the Prophet’s companions lived in the famous mosque and were not rushed out. In fact, some of the most notable companions, such as Abu Huraira, had such a humble start in life.

6.3.2 Demand, Supply, Price and Profit

Other original contributions by Ibn Khaldun to the economics of labor involve the introduction and analysis of economic instruments such as demand, supply, price and profit. It is important at this point to note how Ibn Khaldun’s original concepts have been altered to accommodate them within the capitalist theme. Even though it is seldom pointed out, just as Aristotle’s original concepts of economy were subsequently misinterpreted or twisted to make them look like capitalistic values, Ibn Khaldun’s contributions have been marginalized or trivialized as similar to capitalistic values. In reality, each of Ibn Khaldun’s ideals had a unique starting point, and each focused on the natural state. At first sight, some of Ibn Khaldun’s values also appear similar to Karl Marx’s ideals as outlined in Das Kapital. Once again, the similarity is strictly external.

6.3.2.1 Demand

In modern economics, demand for a commodity is based on the utility of gaining it, and not always on the need for it. As discussed in earlier sections, ‘satisfaction’ is the motive behind demand; it creates the incentive for a customer to make a purchase in the market. This notion matches Ibn Khaldun’s theory, at least at the outset; therefore, it is reasonable to assume that he planted the first seed of the demand theory that was later developed by Thomas Robert Malthus, Alfred Marshall, John Hicks and others. While the presumed existence of satisfaction as the primary incentive for a purchase is disputable (as clarified later in this section), in modern economics it is largely recognized as the primary motivation behind consumerism. If a commodity is in demand and attracts more customer purchases, either the price or the quantity sold will increase. On the other hand, if the demand for a manufacture (craft) decreases, sales go down and therefore the price decreases (Figure 6.2).

Figure 6.2 Derivation of demand: a graphical presentation of the utility function after Thomas Malthus and Alfred Marshall. Derived from the Total Utility (TU) curve, the Marginal Utility curve is congruent with the Demand curve (D) plotted against price and quantity (from Koutsoyiannis, 1979).

Demand for certain commodities depends also on the extent to which they are purchased by the state. Here, the role of government in homogenizing economic welfare is clarified. The Sultan (king) and the ruling elite buy in greater quantity than the people can buy individually. A manufacture develops when the state purchases its products. Through his analytical genius, Ibn Khaldun arrived at a concept known in modern economic literature as “derived demand”. He said: “Manufacture increases and goes up when demand for its products increases.” Demand for a manufacturing worker is likewise derived from the demand for the product in the market. Once again, in Ibn Khaldun’s mind the Islamic Caliphate was the model, and in the absence of true political Islam, at least a benevolent government was the source of a balanced economy. The notion of derived demand, minus the role of a truly representative government (Caliphate) or a truly sustainable economy (Shariah-based), made it through to modern economics, in which ‘derived demand’ is a term used in economic analysis to describe the demand placed on one good or service as a result of changes in the price of some other related good or service. Of course, this is not a unique relationship, as numerous other factors play a role and can have a significant impact on the derived good’s market price. In modern economic analysis, derived demand is solely related to the demand placed on a good or service for its ability to acquire or produce another good or service. The presumption is that goods are replaceable and exchange is reciprocal, irrespective of the background.
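The relation in Figure 6.2 between total and marginal utility can be illustrated with a small numeric sketch; the utility schedule below is an assumed example, not data taken from the figure.

```python
# Assumed total-utility schedule TU(Q) for Q = 0..5 units (illustrative only).
total_utility = [0, 30, 52, 68, 78, 84]

# Marginal utility is the increment of TU per extra unit consumed:
# MU(Q) = TU(Q) - TU(Q - 1).  Its downward slope is what Figure 6.2
# identifies with the demand curve D.
marginal_utility = [total_utility[q] - total_utility[q - 1]
                    for q in range(1, len(total_utility))]
print(marginal_utility)   # [30, 22, 16, 10, 6] -- diminishing marginal utility
```

Under the Marshallian reading, the consumer purchases up to the quantity at which marginal utility (expressed in money terms) equals the price, so the MU schedule doubles as the individual demand schedule.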
Additionally, derived demand can be spurred by what is required to complete the production of a particular good, including the capital, land, labour and raw materials required. This entire process is treated as reversible and reciprocal – all due to the embedded assumption that absolute commensuration has been achieved through the currency of the time. In these instances, the demand for a raw material is directly tied to the demand for the products that require the raw material to be produced. This formulation permeates investment opportunities. Demand derived from the demand for another good can form an excellent investing strategy when used to anticipate the desire for goods outside the specific good desired. If activity in one sector increases, any sector responsible for the first sector’s success may also see gains. In this chain of events, the common link remains the currency foundation that defines wealth.

The principles behind derived demand work in both directions. If the demand for a good decreases, the demand for the goods required to produce the item will also decrease. This is demonstrated by a classic example called the ‘pick and shovel’ strategy, created in response to correlating market forces. During the gold rush, the demand for gold prompted prospectors to search for gold. These prospectors needed picks and shovels, as well as other supplies, to mine for gold. It is arguable that, on average, those who were in the business of selling supplies to these prospectors fared better during the gold rush than the prospectors did. The demand for picks and shovels was derived, to a large degree, from the demand for gold at that time. A more modern example exists within the computer marketplace. As more businesses became dependent on technology and households expanded their home computing capabilities, the demand for computers rose. Derived demand may be seen in the area of computer peripherals, such as mice and monitors, as well as in the components required to produce the computers. The components include items such as motherboards and video cards, as well as the materials used to produce them. It is true that certain production materials may not be subject to large-scale changes based on increases or decreases in demand for a specific product, depending on how widely the production materials are used. For example, cotton is a widely used material in fabric. If a particular print or color is popular during a specific season and its popularity diminishes over the course of a few seasons, this may not have a large impact on the demand for cotton in general.

6.3.2.2 Supply Theory

As generally accepted, modern price theory states that cost is the backbone of supply theory. It is Ibn Khaldun who first explored analytically the role of production cost in supply and price.
In examining the differences between the price of food produced on fertile land and on less fertile land, he located the difference, among other factors, in production cost. In coastal and hill areas, where the land is not suitable for agriculture, the inhabitants are forced to improve the condition of the area and its plantations. They do so by applying additional labor and inputs that entail cost. All of these increase the cost of the agricultural product, which they include in determining its sale price. Andalusia has since been known for its high prices … Its position is the opposite of the land of the Berbers. Their land is so rich and fertile that they do not need to add any cost to agriculture; therefore, in that country the price of food is low. (Ibn Khaldun, 1958, Part IV, Chapter 12, p. 337). Besides personal and state demand, and production cost, Ibn Khaldun introduces other factors that influence the cost of a commodity or service, namely (a) the welfare and prosperity level of a region, and (b) the wealth concentration rate and the tax level imposed on intermediaries and traders. The direct functional relationship between income and consumption provided by Ibn Khaldun opens the way for the consumption function as the cornerstone of Keynesian economics, with one qualifier: Ibn Khaldun was aware of the futility of a myopic approach to consumption, whereas Lord Keynes took the myopic approach to be the only one possible. It is analogous to the

political model in which Ibn Khaldun’s non-Caliphate model has been taken to be the only practical model, ignoring the Caliphate model that Ibn Khaldun himself recognized as the ideal political system (Islam, 2018).

6.3.2.3 Profit

Ibn Khaldun also makes an original contribution to the concept of “profit”. In the economic literature, the theory of profit as a reward for bearing future uncertainty is generally attributed to Frank Knight, who published his idea in 1921. Undoubtedly it is Frank Knight1 who substantially advanced a profit theory in a well-established form. However, it is Ibn Khaldun, not Knight, who laid the cornerstone of this theory. More importantly, Frank Knight makes no room for discerning natural profit from unnatural profiteering. Ibn Khaldun, by contrast, wrote: Business [commerce] means to buy commodities, store them and wait for a market fluctuation to bring about an increase in price (of these commodities). This is called “ribh” [profit]. (Ibn Khaldun, 1958, Part V, Chapter 14, p. 366). In another context, Ibn Khaldun restates the same line of thought: The clever and experienced people in the city know that to hoard and wait for high prices is not good, and that the profit may be reduced or lost through this hoarding. (Ibn Khaldun, 1958, Part V, Chapter 13, p. 368). The concept of profit hence becomes a reward for facing a risk: in undertaking future uncertainty, the one who bears the risk may incur a loss instead of a profit. Similarly, profit or loss may occur as a result of speculation by profit seekers in the market. As we will see in later sections, hoarding, even as a mere motivation, is not a starting point of Islamic economics. Ibn Khaldun merely highlights the fact that whatever is forbidden in the Qur’an is actually unsustainable. Ibn Khaldun then highlights the means of natural profit making, as opposed to profiteering, using the magical words of traders: “Buy low and sell high” (Ibn Khaldun, 1958, Part V, Chapter 9, p. 366).
In its natural state, this is the only way to maximize profit.

6.3.2.4 Price

If Ibn Khaldun’s magical words are applied in cost analysis, it becomes clear that profit may increase, even at an unchanged price of the final product, when a producer reduces the cost of the raw materials and other inputs used in production. This can be done by purchasing them at a discount – or, in general, at a low price – even from a distant market, as indicated in his explanation of the benefits of foreign trade. However, Ibn Khaldun concludes that both excessively low and excessively high prices will potentially destroy the market. This defines the sustainability range of pricing. Therefore, it is advised that a country not bring prices artificially low through subsidy or other methods of intervention. Such a policy is economically perilous because low-priced commodities will disappear from the market, and it creates a disincentive for suppliers to produce whenever their profit is directly affected. The inspiration for this view of pricing clearly comes from a hadith of Prophet Muhammad:

A man came and said: Messenger of Allah, fix prices. He said: [No], but I shall pray. Again the man came and said: Messenger of Allah, fix prices. He said: It is but Allah Who makes the prices low and high. I hope that when I meet Allah, none of you has any claim on me for doing wrong regarding blood or property. (Abu Dawud, 2007, hadith 3450). Ibn Khaldun also concludes that an excessively high price is incompatible with market expansion. When high-priced commodities are few in the market, a high-price policy becomes counterproductive and damages the flow of goods in the market. Ibn Khaldun hence laid the basis of the line of thought that later led to the formulation of disequilibrium analysis (Figure 6.3). He also mentions several factors that influence the general price level, such as increases in demand, limitations of supply, and increases in production cost, which includes the sale tax as one component of total cost. This sale tax is not to be conflated with modern-day government-imposed sales taxes, which are clearly un-Islamic and unsustainable.

Figure 6.3 Cost-push and demand-pull inflation. Economists agree that an increase in cost – illustrated by an upward shift of the aggregate supply curve (AS0 to AS1) – causes an increase in the general price level, from P0 to P1. A similar effect occurs when there is an increase in aggregate demand, illustrated by an upward shift of the AD curve (AD0 to AD1) (from Branson, 1989). After his analysis of what creates overall demand in a growing economy, Ibn Khaldun states the following: Demand for luxury goods finally becomes habit and then becomes necessity. In addition, all labor becomes expensive in the city, and conveniences become expensive, because there are many purposes for which they are demanded as luxuries, and because the government imposes taxes on market and business transactions. This is reflected in sale prices. Conveniences, food, and labor hence become expensive. As a result, expenditures increase drastically, in proportion to the culture (of the city). A large sum of money is spent. In this situation, people need a large amount of money to acquire necessities for themselves and their families, and to meet other needs as well. (Ibn Khaldun, 1958, Part IV, Chapter 13, p. 338). And he therefore concludes: “When goods are few, their prices rise.”

A careful reading of both quotations above makes it clear that Ibn Khaldun identified what is now known as cost-push inflation and demand-pull inflation. In fact, he is the first philosopher in history to systematically identify the factors that influence both individual commodity prices and the general price level (Figure 6.3).
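The two mechanisms in Figure 6.3 can be reproduced with a minimal linear aggregate-demand/aggregate-supply model. All intercepts and slopes below are hypothetical, chosen only to make the direction of the shifts visible:

```python
# Minimal linear AD-AS sketch (hypothetical coefficients).
# AD: P = a_d - b_d * Y   (aggregate demand, downward sloping)
# AS: P = a_s + b_s * Y   (aggregate supply, upward sloping)
# Equilibrium: a_d - b_d*Y = a_s + b_s*Y  =>  Y* = (a_d - a_s)/(b_d + b_s)

def equilibrium(a_d, b_d, a_s, b_s):
    y = (a_d - a_s) / (b_d + b_s)
    p = a_d - b_d * y
    return y, p

# Baseline equilibrium: price level P0
y0, p0 = equilibrium(a_d=100, b_d=1.0, a_s=20, b_s=1.0)

# Cost-push: AS intercept rises (higher production cost, AS0 -> AS1)
y1, p1 = equilibrium(a_d=100, b_d=1.0, a_s=40, b_s=1.0)

# Demand-pull: AD intercept rises (AD0 -> AD1)
y2, p2 = equilibrium(a_d=120, b_d=1.0, a_s=20, b_s=1.0)

# Both shifts raise the price level above P0; note that cost-push
# lowers output while demand-pull raises it.
print(p0, p1, p2)
```

Running the sketch, both shifted equilibria show a higher price level than the baseline, matching the upward movement from P0 to P1 in the figure.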

6.4 Zero Waste Economy

What interest or usury is to an economic system, waste is to a societal lifestyle. The Qur’an assigns humans the purpose of acting as the Viceroy of the Creator, and as such it is a collective duty of humans to preserve the peaceful natural order. Nature is zero-waste; so should be any conscientious development project. Mankind is, according to Islam, a trustee and steward over this earth and its resources, and Islam teaches the preservation and maintenance of the divine natural equilibrium (Mizan). Zakat (mandatory charity) is due upon every Muslim, including every crop and livestock producer, who must pay a proportion of their wealth or produce each year to the poorest in society. This concept alone, were it applied by everyone in the world, would revolutionise, indeed save, lives. An obstacle to this objective is Satan’s avowed contempt for natural processes. In Qur’an 4:119 it is stated that Satan vowed to change natural pathways. So, naturally, there would be numerous attempts to obfuscate any effort to maintain the natural order. It is no surprise that today’s technology development cannot fathom a zero-waste scheme and starts all economic calculations with false doctrines such as ‘greed is infinite’, ‘need is strictly a matter of desire’, ‘humans are inherently selfish’, and so on. We have seen in previous chapters how the sugar culture was introduced and how progressively worse technologies followed. This is anecdotally captured as the HSSA (Honey → Sugar → Saccharin → Aspartame) degradation. That is, the move from honey to sugar represents the conflict between Allah’s order and Satan’s conspiracy to change the natural pathway. The entire environment is a trust given to us, and we must do our best to preserve it as Allah has ordered. The fundamental starting point of Islamic economics is zero waste, meaning absolute harmony with nature. As such, there is no room for any intention to hoard, be greedy, or exploit.
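The zakat obligation mentioned above can be sketched as a simple yearly computation. The 2.5% rate on monetary wealth and the nisab threshold of 85 grams of gold are the standard figures for zakat on savings; the gold price and the wealth amounts in the sketch below are hypothetical:

```python
# Illustrative yearly zakat computation: 2.5% on monetary wealth held
# for a full lunar year, owed only above the nisab threshold.
# The gold price and wealth figures here are hypothetical.

NISAB_GOLD_GRAMS = 85          # standard nisab: 85 g of gold
gold_price_per_gram = 60.0     # hypothetical market price

def zakat_due(wealth, rate=0.025):
    """Return zakat owed on monetary wealth; 0 if below nisab."""
    nisab = NISAB_GOLD_GRAMS * gold_price_per_gram
    return wealth * rate if wealth >= nisab else 0.0

print(zakat_due(10_000))  # 250.0 (above the nisab of 5100)
print(zakat_due(3_000))   # 0.0  (below the nisab)
```

The threshold ensures the levy falls only on those holding surplus wealth, which is the redistributive intent described in the text.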
The fundamental Qur’anic decree is not to waste and to eat only pure food. Here is an example (20:81):

Rough translation of the meaning: “Eat of the good (tayyib) from your sustenance, and be not inordinate (tatgha) with respect to them, lest My wrath should be due to you, and to whomever My wrath is due shall perish indeed.” The key words are tayyib and tatgha. In today’s terminology, tayyib would stand for truly organic, whereas tatgha would stand for violating the principle of zero waste. This also qualifies what constitutes waste, which has several components.

The Prophet Muhammad said, ‘He who sleeps while his neighbour is hungry is not one of us’. In economic terms, this hadith emphasises the need to consider neighbours in even basic human needs. Uthman ibn Affan reported: The Prophet said, “There is no right for the son of Adam except in these things: a house in which he lives, a garment to cover his nakedness, a piece of bread and water.” (Sunan al-Tirmidhi 2341). From the hadith quoted earlier about man’s right to food, water, shelter, and clothing, it follows that their provision falls within basic human rights. In addition, it is implied that this is a collective responsibility of any community. The Qur’an forbids extravagance (6:141), use of excess (7:31), and hoarding (102:4). This means that any extravagance is a waste and thus inconsistent with Islamic values. This is in sharp contrast to today’s culture, in which we eat supermarket fruit out of season that has been flown thousands of air miles across the world, donate minimal amounts to the poor, underpay those who grow the food, and waste vast amounts in the process, including unpicked fruit in the field and losses through inadequate storage and transport. We discard misshapen produce and dispose of unsold supermarket food while castigating the poor who seek to eat what is perfectly viable. Islam guides believers to live their lives in moderation and prohibits wastage, whether of their time, energy, wealth, or food. It is unlawful in Islam to throw away food, as every morsel is a blessing from God. Muslims follow this teaching and cultivate a respect for all resources: water, food, clothing, and other natural resources. This moderation applies regardless of the abundance or scarcity of the provisions. Islamic history is replete with examples of resource conservation; for example, at the Süleymaniye Mosque built by Sinan between 1550 and 1557, soot-laden air from the burning lamps was drawn away, purified, and used to make ink.
Islam provides a strong ethical framework for the protection and maintenance of the environment and the elimination of waste. The following hadith is relevant to food consumption: “The first calamity for this nation after the Prophet’s death is fullness of their stomachs; when their stomachs became full, they became obese and their hearts weakened and their desires became wild” (Al-Dhahabi, 2009, vol. 3, p. 335). The Qur’an is clear in warning us of the tests that come with attaining material possessions or being excessive in our consumption. If one considers the following facts regarding plastics, it becomes easy to understand what fundamental changes need to be made in order to have a fresh start in terms of zero waste. Plastic is an important example because it is the epitome of today’s wasteful manufacturing practices, and yet plastic is principally derived from crude oil – not a single drop of which is inherently unsustainable. In fact, crude oil and its derivatives have been used for millennia without any concern for sustainability until the onset of the plastic culture.

Over 8.3 billion metric tons of plastic have been thrown away in the last 68 years – the weight of 8 billion elephants or 55 million jumbo jets. Some 80% of this plastic has leaked into the ocean, killing marine life and seeping into our food chain. In the UK alone, 13 billion plastic water bottles are thrown away each year, and only 3 billion are recycled. Every human being on the planet will use 136 kilos of single-use plastic a year.

Unlike today’s mantra of the ‘3Rs’, Islam stops unsustainability in its tracks. There is no recycling of something that is already unsustainable. The only solution is to stop manufacturing unsustainable products that are inherently a waste. Any waste is extravagance, and any extravagance is taghut, the transgression of the moral compass. This is made clear in the following Qur’anic verses: “And it is He [God] who has made you successors [khala’ifa] upon the earth and has raised some of you above others in degrees [of rank] that He may try you through what He has given you. Indeed, your Lord is swift in penalty; but indeed, He is Forgiving and Merciful.” (Surah Al-An’am, 6:165). “[A]nd waste not by extravagance. Verily, He likes not the Musrifoon [those who waste by extravagance].” (al-An’am, 6:141). Ibn Maajah (2007, hadith 419)2 narrated from ‘Abdullah ibn ‘Amr ibn al-’Aas that the Prophet passed by Sa‘d when he was doing ablution, and he said, “What is this extravagance, O Sa‘d?” Sa‘d said, “Can there be any extravagance in ablution?” He said, “Yes, even if you are on the bank of a flowing river.” The Arabic text is:

It uses the root word ‘srf’, which stands for equitable exchange; exceeding it is extravagance. It is an obligation to avoid extravagance at every step of mass and energy consumption. This can only be done through dynamic optimization based on re-aligning dynamic intention (qsd) with static intention (niyah), which must be to restore the natural order. In Qur’an 55:8, it is clearly stated that creating imbalance in the natural order is strictly prohibited. This imbalance starts with a personal and very subjective choice: an unsustainable intention. It turns out that freedom of intention is the only freedom humans have, both Islamically and scientifically (Islam, 2014), and that is what should be the focus in developing a sustainable economy. In this regard, the discussion of both static and dynamic intention and their role in the economy is of paramount importance.

6.5 Role of Government in State’s Economy

What distinguishes Ibn Khaldun from his Western successors, especially the Classical writers, is his belief that government plays a critical role in the economy. The government plays an important role in growth and in the country’s economy in general

through the purchase of goods and services, and through fiscal policy, namely taxation and expenditure. The government may also provide an environment of incentives for work and prosperity, or the opposite: an oppressive system that finally becomes self-defeating. Although Ibn Khaldun considers all government laws – economic or otherwise – as unjust in non-Caliphate governments, which lack the moral authority to implement laws (Islam, 2016; Islam, 2018), governments play an important role in modern economies through large-scale purchases. Today’s models are all based on the GDP calculation, which inherently assumes that government expenditure stimulates the economy through income that increases via the multiplier effect. However, according to Ibn Khaldun, if the Sultan (king, government) accumulates income from taxes, business becomes slow and the country’s economic activities are affected significantly through a multiplier effect (Muqaddima, Part III, Chapter 40, p. 257). In general, welfare programs that reduce poverty and help widows, orphans, and the blind should be launched (otherwise these groups become a heavy burden on the state treasury). Even for non-Caliphate entities, Ibn Khaldun recommended that the government spend its tax income wisely to improve the condition of such groups, maintain their rights, and save them from danger (Ibn Khaldun, 1958, Part III, Chapter 51, p. 285). None of this is an issue for a truly Islamic government. As stated earlier, the role of government in the personal lives of citizens is minimal within Islam. The fundamental nature of Islam is that it gives the individual freedom and accountability, and the role of government is limited to organizing activities that are helpful to citizens. Each citizen is obliged to uphold Islamic morals with accountability, and whenever they experience or witness injustice, it is an obligation upon them to take corrective measures.
There are numerous verses of the Qur’an that compel citizens to ‘enjoin good and forbid evil’ (e.g., 3:110, 7:157, 16:91, 31:17). Prophet Muhammad contextualized this notion. He said, “Whoever among you sees an evil action, let him change it with his hand [by taking action]; and if he cannot, then with his tongue [by speaking out]; and if he cannot, then with his heart [by feeling that it is wrong] – and that is the weakest of faith” (Muslim, 2007, hadith 49). This principle applies whenever there is a violation of any law, including labour law. As stated earlier, Islam does not discriminate between ‘criminal offence’ and ‘civil offence’, and an offence committed against the environment or against humans bears the same culpability. In this process, if a good citizen is commanded to do something, he should be the quickest of people to do it, and if he is forbidden to do something, he must be the one who keeps furthest away from it. The Qur’an is clear on this and states (interpretation of the meaning): “O you who believe! Why do you say that which you do not do? Most hateful it is to Allah that you say that which you do not do.” (Qur’an 61:2–3). Notably, this moral code is equally applicable to all citizens, and more so to the moral elites who are supposed to be at the helm of the government. Unlike modern-day governance, Islamic governance holds government officials to a stricter moral standard than the general public. For Islamic governance, social justice is the raison d’être. In Islam, governance means ensuring social justice, which in turn means redistribution of power and income in such a way that, when introduced at the lowest level, it spreads upward to all reaches of society. This is quite the opposite of what modern-day economies have done. Today, practice takes the form of production and income filtering down to lower levels; the

trickle-down approach is the essence of governance. The standard was set by the Prophet, then consolidated by the first caliph, Abu Bakr As-Siddiq, who said, “The weak among you shall be strong with me until their rights have been vindicated, and the strong among you shall be weak with me until, if the Lord wills, I have taken what is due from them …” (Qasmi, 2008, p. 98), and who considered it one of the central objectives of his public policies. On the relationship between ruler and citizens, Umar, the second caliph, said, “People usually hate their ruler, and I seek the protection of Allah lest my people should have similar feelings about me” (Numani, 1943, p. 299). No senator or president could have lectured Umar about the abuses of campaign financing or accused him of having handed over the entire social system to corporate elites. How Umar practiced social equality was best demonstrated when he entered Jerusalem as a liberator, not as rulers customarily entered conquered cities. He entered Jerusalem in humility, walking on foot while his servant comfortably rode a camel, as they had been taking turns riding. He then gave Muslims another practical example of how to treat Christians and other non-Muslims: when the Prelate of Jerusalem asked him to pray in the Sepulchre, Umar chose to pray some distance away from the church, saying that he was afraid that in the future Muslims might use it as an excuse to take over the church and build a masjid, claiming that this was the place where Umar had prayed – a clear practical lesson in respecting others. There were numerous examples of the caliph moving around incognito at night to find out if anyone was suffering from hunger or economic deprivation (Nu’mani, 2015). One such example is recounted here. A woman was trying to lull her children to sleep by pretending to cook food in an empty pot. Umar was shocked when he entered this house, and he asked the woman why she had not sought assistance from the public treasury.
The woman, not knowing the identity of Umar, said, “Who cares for poor people?” Umar then brought grain for her, carrying it on his own back, and cooked for the hungry children. He was so kind and generous with them that the woman said, “I wish you were the caliph.” Umar told his servant, who was remonstrating, that Allah would hold the caliph accountable for the hunger and poverty that people suffered. During his reign, Umar gave the poor stipends from the public treasury without any discrimination based on religion. After taking the surrender of Jerusalem and completing his tour of Syria, Caliph Umar delivered an important speech that clearly set out his understanding of his role as caliph. He stated,

“Imbibe the teachings of the Qur’an, then practice what the Qur’an teaches. The Qur’an is not a theory; it is a practical code of life. The Qur’an does not only bring you the message of the hereafter, it also primarily intends to guide you in this life. Mold your life in accordance with the teachings of Islam, for that is the way of your wellbeing. By following any other way you will be inviting destruction. Fear Allah, and whatever you want seek it from Him. All men are equal. Do not flatter those in authority. Do not seek favors from others. By such acts you demean yourself. And remember that you will only get what is ordained for you, and no one can give you anything against the will of Allah. So why do you then seek favors from others who have no real control? Only supplicate Allah for He alone is the Sovereign. And speak the truth. Do not hesitate to say what you consider to be the truth. Say what you feel. Let your conscience be your guide. Let your intentions be good, for verily Allah is aware of your intentions. In your deeds your intentions count. Allah has, for the time being, made me your ruler. But I am one of you; no special privileges belong to rulers. I have some responsibilities to discharge, and in this I seek your cooperation. Government is a sacred trust, and it is my endeavor not to betray the trust in any way. For the fulfillment of this trust I have to be a watchman …” (from: Hasan, 1997, p. 477). In another account, someone asked Umar, “What is your criterion for selecting a man for appointment as a Governor?” Umar said, “I want a man who, when he is among men, should look like a chief although he is not a chief, and when he is a chief, should look as if he is one of them.” Umar wanted his comrades to advise him regarding the selection of the right man for the office of the Governor of Kufa.
One man rose to say that he could suggest the fittest person for the job. Umar enquired who he was, and the man said, “Abdullah bin Umar.” Umar said, “May God curse you! You want me to expose myself to the criticism that I have appointed my son to a high office. That can never be.” Once the post of the Governor of Hems fell vacant, and Umar thought of offering it to Ibn Abbas. Umar called Ibn Abbas and said, “I want to appoint you as the Governor of Hems, but I have one misgiving.” “What is that?” asked Ibn Abbas. Umar said, “My fear is that some time you would be apt to think that you are related to the Holy Prophet, and would come to regard yourself above the law.” Ibn Abbas said, “When you have such a misgiving, I would not accept the job.” Umar then said, “Please advise me what sort of man I should appoint.” Ibn Abbas said, “Appoint a man who is good, and about whom you have no misgiving” (Hasan, 1997, p. 155). In that sense, being within the government system was a great disadvantage, certainly in the Rashidun Caliphate. The following episodes consolidate this notion. On receiving a gift of sweets from his governor in Azerbaijan, Umar inquired whether all the people there ate such sweets. The answer was that they were reserved for the elite of the society. Umar then issued the following order to the governor: “Do not satisfy yourself from any kind of

food until all the Muslims eat their fill from it before you”. Umar once stood guard in the night with a companion to watch over some travellers. A baby was crying, and the mother was unable to make it stop. Umar asked what was wrong. She said that the baby was refusing to be weaned. He asked why she would want to wean her baby while it was still so young. She replied, without knowing who he was, that “Umar only prescribes a share of the Treasury for the weaned ones”. Umar was devastated at hearing this statement. At dawn prayer, his voice was almost incomprehensible from his weeping. Umar felt he had wronged those babies who may have died from being weaned too early. He then ordered that a share of the Treasury be prescribed for every child from birth. The obligation of government officials goes beyond their own selves. For instance, Umar kept his family’s activities under tight scrutiny lest they be seen to be abusing their status because of their relationship to him. Even when what they did was legal, he was still angered, and if they benefited financially, even indirectly, he forbade them from retaining such gains. Umar had a precept: “If any of you saw any of your brothers committing a slip, you should (screen him and) help him. You should ask Allah to accept his repentance, and you should not assist Satan against him”. This would prevent backbiting and gossiping – two of the most common sins. Once a woman brought a claim against Umar. When Umar appeared on trial before the judge, the judge stood up as a sign of respect. Umar reprimanded him, saying, “This is the first act of injustice you did to this woman” (Barnes, 1984, p. 28). The following hadith makes this point clearer as applied in the field of public administration:

Narrated Jabir bin Samura: The people of Kufa complained against Sa‘d to ‘Umar, and the latter dismissed him and appointed ‘Ammar as their chief. They lodged many complaints against Sa‘d and even alleged that he did not pray properly. ‘Umar sent for him and said, “O Aba ‘Is-haq! These people claim that you do not pray properly.” Abu ‘Is-haq said, “By Allah, I used to pray with them a prayer similar to that of Allah’s Apostle and I never reduced anything of it. I used to prolong the first two rak‘at of ‘Isha prayer and shorten the last two rak‘at.” ‘Umar said, “O Aba ‘Is-haq, this was what I thought about you.” And then he sent one or more persons with him to Kufa to ask the people about him. So they went there and did not leave any mosque without asking about him. All the people praised him till they came to the mosque of the tribe of Bani ‘Abs; one of the men, called Usama bin Qatada with a surname of Aba Sa‘da, stood up and said, “As you have put us under an oath, I am bound to tell you that Sa‘d never went himself with the army and never distributed (the war booty) equally and never did justice in legal verdicts.” (On hearing it) Sa‘d said, “I pray to Allah for three things: O Allah! If this slave of yours is a liar and got up for showing off, give him a long life, increase his poverty and put him to trials.” (And so it happened.) Later on, when that person was asked how he was, he used to reply that he was an old man in trial as the result of Sa‘d’s curse. ‘Abdul Malik, the sub-narrator, said that he had seen him afterwards and his eyebrows were overhanging his eyes owing to old age, and he used to tease and assault the small girls in the way. (Bukhari, 2007, hadith 755). There are broad guidelines for government interference in areas of crisis, including unemployment, general famine, and natural disaster. In these cases, the state plays a prominent role.
Having the state provide for the needy insulates them from the stigma of seeking help privately and, perhaps more importantly, elevates a collective economic problem to the level of the state, so that a comprehensive solution can be offered if need be. In times of drought and famine, the government of the Rashidun Caliphate took a number of special measures to counter the crisis, including temporarily halting corporal punishment for theft committed by destitute citizens. The second Rashidun Caliph, Umar, said, “God has deputized us on His slaves (humans) to protect them from hunger, to clothe them and facilitate finding occupations for them” (cited by El-Ashker and Wilson, 2006). Ibn Khaldun saw another role for the government. Demand for certain commodities depends also on the extent to which they are purchased by the state. Here, the role of government in homogenizing economic welfare is clarified. Based on the non-Caliphate model, it is said that the Sultan (king) and the ruling elite buy in far greater quantities than individuals can. A manufacture develops when the state purchases its products. Through his analytical thinking, Ibn Khaldun arrived at a concept that is known in modern economic literature as “derived demand”. He said: “Manufacture increases and goes up when demand for its products increases.” Demand for manufacturing workers is likewise derived from the demand for their products in the market. Once again, in Ibn Khaldun’s mind, the Islamic Caliphate was the model; in the absence of true political Islam, at least a benevolent government was the source of a balanced economy. The notion of derived demand minus the role of a truly representative government (Caliphate) or

truly sustainable economy (Shariah-based) made it through to modern economics, in which ‘derived demand’ is a term used in economic analysis to describe the demand placed on one good or service as a result of changes in the price of some other related good or service. Of course, this is not a unique relationship, as numerous other factors play a role and can have a significant impact on the derived good’s market price. As noted earlier, production cost includes the sale tax as one component of total cost. For an Islamic economy, however, a sales tax is not an option, because it inherently interferes with a free-market economy. It remains a question whether certain commodities can be controlled, or certain taxes imposed, to curtail excessive movement of a given commodity. The point here is that Islam never allows the government to interfere with the natural course of the economy. Even during drought or natural disasters, it is not considered the government’s role to impose taxes or fix prices. Charity is always welcome, but not price fixing. Similarly, a sales tax is not Islamic simply because it taxes people for buying certain types of goods. If goods are haram (forbidden or unsustainable), then they should be banned. If they infringe on human rights, either the price should be reduced or money can be given to poor people to buy basic needs (depending on whether the seller is being fair). But it is not fair for the ruler to impose non-Qur’anic taxes on the population, because such taxes are not generally permitted in Islamic law. This legal point was made by Ibn Taymiyyah (translation forthcoming), who forbade the imposition of penalties that a seller does not deserve, i.e., one who is not breaking any rules but is selling at a fair/analogical price, because that would be oppression of the honest seller.
This is based on the fundamental principle, established in the hadith quoted earlier, that Allah is the fixer of natural prices.

6.6 Macroeconomy and Theory of Money In macroeconomics, Ibn Khaldun also makes contributions to the theory of money. According to him, money is not the real form of wealth, but an instrument through which wealth may be obtained. He was the first writer to present the prime functions of money as a measure of value, a store of value and a numeraire: Mines, gold and silver as (measure of) value for capital formation … considered as wealth and property. Even in a certain situation, everything is obtained, the final purpose only to acquire them. Everything depends on the fluctuation from which (gold and silver) are exempted. They are basis for profit, property and wealth. (Ibn Khaldun, 1958, Part V, Chapter 1, p. 354). The real form of wealth is not money. Wealth is created or transformed through labor, in the form of capital formation in the real measure. Hence it is Ibn Khaldun who for the first time differentiates between money and real wealth, although he realizes that the latter is obtained by means of the former. However, money plays a more efficient role than bartering in the business transactions of a society, in which people exchange with each other the results of their labor, both in the form of goods and services, to fulfill needs that cannot be fulfilled individually. Money can also facilitate the flow of goods from one market to another, even across a country's border. The most important aspect of money and its use is the Islamic description of human needs. Islam recognizes that man has two broad types of needs (Hassan, 1995): 1. Mundane, that is, the basic materials needed for sustaining physical life. For humans these take the form of food, clothing, and shelter. These needs themselves set humans apart from other creatures on earth, as no other creature has the need for clothing. 2. Spiritual, that is, an environment which allows full and free expression of the humanistic urge to choose moral ideals. These two aspects are interlinked in Islam.
There is no spiritual growth without physical growth (Qutb, 1979). For instance, the fundamental pillars of Islam start with the foundation of the Proclamation of Shahadah (major and minor premises, in conformance with Trust or Iman; spiritual, continuous or static), then move to Connection (Salah, both spiritual and physical, five times a day), Fasting (Sawm, both physical and spiritual, one month per year, only for the physically fit), Pilgrimage (physical and spiritual, only for the financially solvent), and compulsory charity (physical and spiritual, only for the financially solvent). They all involve both spiritual and physical means, and two of the five involve financial solvency and require monetary sacrifice. So, money in Islam is only for the purpose of reaching this higher goal of fulfilling both mundane and spiritual needs. This principle is encapsulated in Qur'an 62:10:

A rough translation of the meaning is: And when the prayer (salah) has been concluded, disperse within the land and seek from the bounty of Allah, and remember Allah often that you may succeed. (Qur'an 62:10). It turns out that material progress is an inalienable ingredient of Islam, in line with the divine objective of the earth being a testing place for humanity (Qutb, 1979). As such, material needs are limited and there is no excuse for excess. If properly executed, a zero-waste scheme is inherent to human ideals that exclude consumer gluttony. By contrast, today's economic models are principally dependent on ever-increasing consumer needs and a wasteful lifestyle. Consumerism is condemned in Islam. In Qur'an (47:12), it is stated (translation of the meaning), "And those who do kufr (insolent disobedience) avail of material things and eat as do the animals; their abode is Hell". Excessive attention to materialistic desires can land one in hell, as clearly pointed out in Qur'an 79:38–39:

A translation of the meaning being: “And [who] preferred the life of the world, Then indeed, Hellfire will be [his] refuge.” This approach then automatically becomes long-term, hence sustainable.

6.6.1 Zero-Interest Economy Just as waste-based technology development cannot serve as a starting point, an interest-based scheme is not a starting point for Islamic economics. The notion of interest is synonymous with modern civilization. It has become so completely institutionalized and accepted in modern economies that it is almost impossible to conceive that there are some who completely oppose it and refuse any transaction that involves interest. Ibn Khaldun knew well the perils of a 'usury-based system', whereas modern economists, while taking Ibn Khaldun's economic model verbatim, completely ignored his Caliphate model. Interest is strictly forbidden in Islam. When one reads the Islamic texts concerning interest, one is immediately struck by how stringent the warnings are against any involvement in interest. Islam prohibits a number of immoral acts, such as fornication, adultery, homosexuality, consuming alcohol and murder. But the breadth of discussion and extent of the warnings for these other acts do not reach the level of those related to taking interest. In the words of Qutb (1999), "No other issue has been condemned and denounced so strongly in the Quran as has usury." For instance, consider some of the verses of the Qur'an listed below (translations of meanings).

"O you who have Iman (Trust in God), do not consume interest, doubled and multiplied, but fear God that you may be successful. And fear the Fire, which has been prepared for the Kafir (insolent and disobedient)." (Quran 3:130–131). "Those who consume interest cannot stand [on the Day of Resurrection] except as one stands who is being beaten by Satan into insanity. That is because they say, 'Trade is [just] like interest.' But God has permitted trade and has forbidden interest. So whoever has received an admonition from his Lord and desists may have what is past, and his affair rests with God. But whoever returns [to dealing in interest or usury], those are the companions of the Fire; they will abide eternally therein. God destroys interest and gives increase for charities. And God does not like every sinning disbeliever." (Quran 2:275–276). This latter verse is profound. It speaks to the fact that affinity to interest is a form of addiction: a person loses all sense of logic, remaining oblivious to the fact that interest, or usury, is the root of discontent in this life, without even considering the long-term consequences to himself and to society at large. In previous chapters, we have seen how a myopic vision, completely obscured by the lust for short-term pleasure (and the avoidance of short-term pain), can lead to this attitude. This is the state of his "insanity" in this world; since a man will rise in the Hereafter in the same state in which he dies in the present world, he will be resurrected as a lunatic. Secondly, the verses make it quite clear that there is a difference between legitimate business transactions and interest. In scientific terms, this is the difference between sustainable and unsustainable. Figure 6.4 depicts this bifurcation.
The zero-interest base is also 100% Iman (trust in God and trust in long-term consequences), which leads to sustainable development and true economic growth. Such growth starts off exponentially but tapers off with time due to homogenization of the collective wealth. The bottom graph, on the other hand, represents the interest-based economy, which is inherently implosive and can lead to economic extremism, the kind we have experienced in the modern era, as discussed in previous chapters. In our previous work, we defined the top graph as the 'intention-based' economy (Zatzman and Islam, 2007).

Figure 6.4 Difference between zero-interest economy and interest-based economy is glaring.
These verses clearly state that God "destroys usury and gives increase for charities." Scientifically, this means that by taking the short-term approach there may be immediate benefit, but in the long run it is devastating for the economy, as well as leading down the path of God's wrath in the hereafter. Interest is all about amassing more money, even without putting the money at risk. In the long run, however, this does not produce happiness, although it creates an illusion of it. A General Social Survey (GSS) study reported in Business Week (October 16, 2000) concluded that money was not buying happiness and that the new lifestyle and its aftershocks were causing a rise in unhappiness. According to that study, although there was a per capita increase in income between 1970 and 1998, Americans, on the contrary, grew less happy. The new social tendencies overshadowed any material gains. The study found that although extra income brings extra happiness, the impact is surprisingly small. It also found that factors such as gender and marital status weigh more heavily. Another finding was that women are growing more unhappy than men. The increase in divorce and separation between spouses is having a negative impact on the family structure and the psychology of its members. Business Week concluded: "At the very least, it suggests that those who think income gains alone guarantee greater happiness are deluding themselves. And it implies that some apparent aspects of the New Economy, such as more bouts of unemployment and greater income inequality, carry significant psychological costs." As we have seen in previous chapters, the state of the global economy is a vindication of the Qur'an's ruling on an interest-based economy.
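The two trajectories of Figure 6.4 can be sketched numerically. The functional forms below are purely illustrative assumptions of ours, not the authors': the zero-interest economy is modeled as logistic growth that tapers off as wealth homogenizes, while the interest-based economy is modeled as output from which compounding debt service is drained until it implodes.

```python
# Illustrative only: logistic growth (tapering) vs. output eroded by
# compounding debt service (implosive). All parameters are arbitrary.
def zero_interest_path(years, capacity=100.0, rate=0.2, w0=1.0):
    w, path = w0, []
    for _ in range(years):
        w += rate * w * (1 - w / capacity)   # fast at first, then tapers
        path.append(w)
    return path

def interest_based_path(years, rate=0.2, output0=10.0, debt0=1.0, irate=0.25):
    output, debt, path = output0, debt0, []
    for _ in range(years):
        debt *= 1 + irate                              # debt compounds regardless
        output = max(0.0, output * (1 + rate) - debt)  # service drains output
        path.append(output)
    return path

a = zero_interest_path(60)
b = interest_based_path(60)
print(a[-1] > a[0])   # True: grows toward the capacity ceiling
print(b[-1] == 0.0)   # True: output is eventually wiped out
```

Because the debt compounds faster than output grows (25% vs. 20% in this sketch), the second trajectory collapses to zero within a decade, mirroring the implosive bottom curve of Figure 6.4.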

Qur'anic verses continue to pound on the interest-based financial system. In the following verses, Allah literally declares war against those who are involved in usury: "O you who have believed, fear Allah and give up what remains [due to you] of interest, if you should be believers. And if you do not, then be informed of a war [against you] from Allah and His Messenger. But if you repent, you may have your principal; [thus] you do no wrong [to others], nor are you wronged." (Quran 2:278–279). Who in his right mind would expose himself to a declaration of war from God and His Messenger? Undoubtedly, a stronger threat one will rarely find. At the end of the verse, God makes it very clear why interest is forbidden: it is wrongdoing. The Arabic word for this is dhulm, meaning that a person has done wrong to, harmed or oppressed another person or his own soul. This happens to be the same word that was used in the context of Adam and Eve's first 'sin' of disobeying God's command. This verse demonstrates that interest is not forbidden simply due to some ruling of God without any rationale behind it. Interest is harmful and sows the seeds of continuing oppression, social disorder and economic extremism. In addition to the verses of the Quran, Prophet Muhammad contextualized the effect of interest. For example, the following statement clearly demonstrates the gravity of this action: "Avoid the seven destructive sins: associating partners with God, sorcery, killing a soul which God has forbidden, except through due course of the law, devouring interest, devouring the wealth of orphans, fleeing when the armies meet, and slandering chaste, believing, innocent women." (al-Bukhari and Muslim) In fact, another statement of the Prophet should be sufficient to keep any God-fearing individual completely away from interest.
The Prophet (peace and blessings of Allah be upon him) said: "One coin of interest that is knowingly consumed by a person is worse in Allah's sight than thirty-six acts of illegal sexual intercourse." (al-Tabarani and al-Hakim) The Companion Jaabir narrated that the Messenger of God (peace and blessings of God be upon him) cursed the one who takes interest, the one who pays interest, the witnesses to it [that is, to the interest contracts] and the recorder of it. Then he said, "They are all the same." (Muslim) The Prophet's words make clear that there is no difference between the one who pays interest and the one who receives it: both are involved in a despicable practice and, hence, both are equally culpable. The following Hadith of the Prophet highlights the trickery of the Money god, which implants an obsession with Money, Sex, and Control: "If illicit sexual relations and interest openly appear in a town, they have opened themselves to the punishment of God." (al-Tabarani and al-Hakim)

6.6.2 Explanations and Theories There has been no shortage of scholars who routinely touted an interest-based economy as the only way to drive social orders. These scholars all find a way to justify the existence of interest, each advancing some theory to support his conclusion. In the history of economic thought, one can find the following theories justifying interest.
1. The "Colorless" Theories (as Boehm-Bawerk calls them): These were advanced by Adam Smith, Ricardo and other early economists. They have many flaws, including conflating interest with gross profit on capital. Ricardo further traced all value of capital back to labour but somehow failed to note that it was never labour that was receiving the payment for said value, making such a suggestion preposterous.
2. The Abstinence Theories: Theories of this kind have popped up every now and then. Economists discovered that "abstinence" may not be a good word to use and would often change it to other terms, such as "waiting" (a la Marshall). Interest is, in essence, the wage one receives for "waiting" or "abstaining" from immediate consumption. This theory failed because it assumes that savings are solely a function of interest, which has been found not to be true. Obviously, the simple passage of time cannot generate, let alone guarantee, economic growth.
3. Productivity Theories: The proponents of this theory see productivity as being inherent in capital, and therefore interest is simply the payment for that productivity. The theory, as put forward by Say, assumes that capital produces surplus value. Once again, there is no proof to support that claim. The most that one can claim is that some value has been created, which is a payment to capital; one cannot prove that excess or surplus value has been created, which is the essence of the claim that interest is justified. These theories also completely ignore monetary factors when analyzing interest.
4. Use Theories: Boehm-Bawerk (1959) rejected the validity of the assumption that there exists, beside each capital good, a 'use' thereof which is an independent economic good possessing independent value. He further emphasized that, in the first place, there simply is no such thing as an independent 'use of capital' and, consequently, it cannot have independent value, nor by its participation give rise to the phenomenon of 'excess value'. To assume such a use is to create an unwarrantable fiction that contravenes all facts and starts a paradoxical cognition process.
5. Remuneration Theories: This group of economists sees interest as the remuneration of "labor performed" by the capitalist. Although supported by English, French and German economists, this view clearly emerges from a conflation of 'labour' with equity. In a dynamic process, labour adds value, and that labour can be intellectual (intangible) or physical (tangible); but to claim that labour is inherent to the owner of an equity, irrespective of the actual economic process, is simply an error.
6. The Eclectic Theories (combinations of earlier theories, such as Productivity and Abstinence): This line of thought reveals a symptom of dissatisfaction with the doctrine of interest as presented and discussed by economists past and present. As no single theory on the subject is in itself considered satisfactory, economists have tried to combine elements from several theories in order to find a satisfactory solution to the problem. They often tout lower interest rates, similar to the doctrine of the 3R's and waste minimization in the context of sustainable development.
7. Modern Fructification Theory: Henry George developed this theory, but it never carried enough weight to attract many, if any, followers.
8. Modified Abstinence Theory: Yet another unique theory, proposed by Schellwien; it never had much impact.
9. The Austrian Theory (the Agio or Time-Preference Theory): This is the view that Boehm-Bawerk himself endorses. According to this theory, interest arises "from a difference in value between present and future goods." Zatzman and Islam (2007) criticized this theory in the context of 'Islamic Banking', which seems to have adopted it, albeit without explicit assertion.
10. Monetary Theories (the Loanable Funds Theory, the Liquidity Preference Theory, the Stocks and Flows Theory, the Assets-Preference Approach): More recently, economists have tried to introduce and emphasize the influence of monetary factors on the issue of interest. In reality, though, the emphasis here switches from why interest is paid to what determines the prevailing rate of interest. Interest in liquidity preference theory is reduced to nothing more than a risk premium against fluctuations about which we are not certain. It leaves interest suspended, so to speak, in a void: there is interest because there is interest, thus justifying interest without actually justifying it.
11. Exploitation Theory: Socialist economists have always considered interest as nothing but exploitation. It should be recalled that the "founding fathers" of capitalist theory, Adam Smith and Ricardo, believed that the source of all value is nothing but labour.
If that is true, then all payments should be made to labour, and interest is nothing but exploitation. Zatzman and Islam (2007) have shown that interest is paid to a supposedly independent factor of production, which may be called waiting, postponement, abstinence or use; however, such payments remain an obstacle to the real economic growth that must precede real production. All theories justifying interest have failed to answer, or to prove, why interest is paid or should be paid to this factor. Some point to the necessity of waiting, others to the necessity of abstinence or postponement, but none of these explanations answers the question. The big question then becomes: why are we adding value to something that inherently has no value in terms of production? Before answering it, of course, economists moved ahead to ask what kind of value can be given to information, thus unleashing the works of a series of Nobel laureate economists. Even if interest is considered some kind of payment to a factor of production, it has some unique characteristics that set it apart from payments to any other factor of production. Due to its unique nature, it leads to some very disturbing results.

First, interest leads to an inequitable distribution of income. This can be seen with an example of three people. Suppose there are three people who consume all of their income in a given year, yet one of them starts with $1,000 in savings, a second with $100 and a third with zero. At 10% interest per annum, by the end of the year the first person will have $1,100 in his account, the second $110 and the third zero. If the same scenario repeats the next year, the first person will have $1,210, the second $121 and the third zero. Already, one can see how the gap between them grows every year, even between those who have some savings of their own. This scenario becomes even worse if the richest person is also able to add savings. Suppose he adds one thousand dollars at the end of each year: he will have $1,100 at the end of the first year, to which he adds $1,000; continuing at 10% interest, he will have $2,310 at the end of the second year, and so on. Now, it would be one thing if this money were actually paid for some positive factor of production, but in reality one cannot make that argument in this case. The money that people are making via interest may have been squandered, lost or even stolen by those who borrowed it, but the interest still has to be paid. It may have been invested in a completely losing project that produced nothing. None of that matters: the interest has to be paid regardless of whether that "factor of production" produces anything or not. This is simply one of the unique aspects of money and of payments to money. No one can argue that this is just, and its result is an inequitable distribution of money. Furthermore, the distribution of income becomes more and more skewed over time. Without production, one party accumulates wealth, whereas, with all his labour, the other struggles to pay the interest that compounds every year.
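The arithmetic of this example is easy to verify in a few lines of Python (a minimal sketch; the 10% rate and the dollar figures are the example's own):

```python
# Annual compounding at 10%: the gap between savers widens every year
# even though none of them engages in any productive activity.
def grow(balance, years, rate=0.10):
    for _ in range(years):
        balance *= 1 + rate
    return balance

print(round(grow(1000, 2), 2))  # 1210.0
print(round(grow(100, 2), 2))   # 121.0
print(round(grow(0, 2), 2))     # 0.0

# The richest saver also deposits $1,000 at the end of year one:
b = grow(1000, 1) + 1000        # $2,100 entering year two
print(round(grow(b, 1), 2))     # 2310.0 at the end of year two
```

The same loop run over more years shows the distribution skewing further: the saver with nothing stays at zero forever, while the largest balance compounds away from the others.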
As we have often seen, a 'house owner' may end up owning only 10% of the house and continue to pay many times over, while the bank collects interest without engaging in economically productive activity. Even though it is evident that inequality is counterproductive in economic terms, someone could still ask whether an inequitable distribution of income should be considered a major issue. Besides the psychological effects on the poor, especially given mass-media advertising that emphasizes the good life and the need to consume, there are very important effects on the market as a whole. In a market economy, production will be geared towards those who have the money to pay for the output, regardless of how necessary other goods may be for society. If the rich desire, demand and are willing to pay a lot of money for SUVs and gas-guzzling vehicles, those will be produced, regardless of what research results indicate. As the income distribution becomes more and more skewed, more and more resources will be devoted to the demands of the richer classes. Since resources are somewhat "fixed," this means that less and less will be devoted to the needs of the poorer classes. Furthermore, the fewer the resources devoted to the goods that the poor consume, the lower the supply, which drives up the prices of those goods, further harming the poor people's overall economic situation. For example, one can find numerous medical clinics catering to the rich (those who can afford such treatments), even if they are far from necessary, such as the numerous places for cosmetic surgery and the like. At the same time, one may find it very difficult to find clinics catering to the poor and meeting their basic needs. If the poor could pay more for those essential services, then, in a market-driven economy, one would definitely find more of those types of clinics, more resources devoted to those needs, and cheaper prices in the long run for what they need. In addition, the burden of interest upon the poor who fall into debt puts them in a situation where they cannot advance socially or economically, widening the gap between the rich and the poor. Debt itself is a difficult situation for any individual. However, it is interest payments that make one's debt a moving target, many times one that an individual simply cannot keep up with. Again, it is a bogus factor of production, but it works to allow the rich to get richer while putting a great burden upon those who fall into debt. As shown in Figure 6.4, the core of this discrepancy resides in the fact that the basis of interest's existence is aphenomenal and thus inherently implosive. The current debt situation, with the major role that interest plays in it, is potentially devastating for the world as a whole. In Global Trends 2015, the Central Intelligence Agency (CIA) recognized (Hertz, 2004): The rising tide of the global economy will create many economic winners, but it will not lift all boats. [It will] spawn conflicts at home and abroad, ensuring an ever-wider gap between regional winners and losers than exists today. [Globalization's] evolution will be rocky, marked by chronic financial volatility and a widening economic divide. Regions, countries and groups feeling left behind will face deepening economic stagnation, political instability and cultural alienation. They will foster political, ethnic, ideological and religious extremism, along with the violence that often accompanies it. Hertz (2004) details how debt is destroying the developing world and threatening us all, delineating many of the dangers of massive debt and confirming Figure 6.4. She writes: Debt's ugly progeny (poverty, inequality, and injustice) are also called upon to justify, and even legitimize, acts of the greatest violence.
Only a few weeks after the World Trade Center was attacked, leading African commentator Michael Fortin wrote: "We have to recognize that this deplorable act of aggression may have been, at least in part, an act of revenge on the part of desperate and humiliated people, crushed by the weight of the economic oppression practiced by the peoples of the West." Fortin's language ("crushed," "oppression," "desperate," "humiliated") is deliberately evocative. And it is manifestly clear that there is an audience with whom such words powerfully resonate. (Hertz, 2004).

6.7 The Optimum Lifestyle Economics is all about a society. Every society comprises individuals, each of whom has personal convictions and a set of values. Islam et al. (2015) discussed how lifestyle in ancient Greece was in conformance with a sustainable lifestyle. It was with the introduction of the control of the Roman Catholic Church and the concept of 'original sin' that collective lifestyle took a downward turn. The concept that humans are born perfect, although they have flaws, was replaced by the concept of 'original sin' that saw humans as inherently selfish. Also replaced was the notion of having a higher purpose of life, supplanted by the notion of 'repenting' and paying off for sins committed, without any accountability for long-term consequences. Aristotle predates the 'original sin' dogma, and it is not surprising that human 'flaws' do not form the core of Aristotle's philosophy of humanity. The way Aristotle connects the purpose of life to long-term success makes it clear that he endorsed the long-term approach. This is in line with what we know of Confucianism or Buddhism. However, new science does not recognize the fact that Islam, one of the three so-called 'monotheist' religions, is all about the long-term approach. In fact, as discussed in previous sections, Qur'an (79:38) describes the short-term approach (spiraling downward) as the one leading to the life of eternal torment. Aristotle repeatedly refers to the underlying objective of all actions, that objective being 'eudaimonia' (Greek: εύδαιμονία). This word is commonly translated as happiness or welfare; however, "human flourishing" has been proposed as a more accurate translation. Etymologically, 'eudaimonia' consists of the words "eu" ("good") and "daimōn" ("spirit" or "soul"). This objective is central to the concept of morality and ethics in Aristotelian work, which uses it as the impetus for 'aretē', most often translated as "virtue" or "excellence", and 'phronesis', often translated as "practical or ethical wisdom". In Aristotle's works, eudaimonia was (based on older Greek tradition) used as the term for the highest human good, and so it is the aim of practical philosophy, including ethics and political philosophy, to consider (and also experience) what it really is and how it can be achieved. Once again, this is entirely consistent with the Qur'anic narration of virtues and 'sins'. For instance, Qur'an 49:13 states that the source of righteousness is 'taqwa', a term denoting consciousness of the Creator and the Creator's prescribed purpose designated for mankind, who can then seek guidance from the Qur'an (2:2).
This stance on humanity and the purpose of life is starkly different from the Christian dogma that sees a child in a state of sin and, although people can commit further sins, offers no criterion that dictates the nature of virtues. On the other hand, both Hinduism and Buddhism retained the original concept that is supported by the Qur'anic narrative (Islam et al., 2016; Islam, 2017; 2018). They both recognize that every action is itself a phenomenon forming part of the objective conditions. Every action is, however, preceded by an intention that forms part of the subjective conditions. For instance, the relationship between chetna (inspiration) and karma (deed) was outlined in the Mahabharat and in the scripts of Buddha. The most famous saying of the Prophet Muhammad, and the first cited in Bukhari's collection of Hadiths, is that any deed is evaluated based on its intention (niyah in Arabic). A review of human history reveals that the perpetual conflict between good and evil has always been about opposing intentions. The good has always been characterized by the intention to serve a larger community, thereby assuring the long-term good of the individual, while evil has been characterized by the intention to serve self-interest. This is entirely consistent with Aristotle's take on humanity. Aristotle also subscribed to the notion that nature is perfect, and both animate and inanimate objects function perfectly. It follows that the tangible part of humans falls under the universal order and is independent of their control. However, humans also have intellect, or rationality. It is this rationality and its practical application that make humans unique. Humans are capable of drawing upon their experience and blending it with their inherent qualities, such as temperance, courage, a sense of justice and numerous other virtues. Creating a balance between various extremes is the objective of a successful person. In his words, these virtues are "the mean between the extremes." A life of virtue is the ideal for human life. This is entirely consistent with the Islamic standard of public governance and rules of engagement in foreign policy (Islam, 2016). In contrast to the state of virtue comes the state of vice, which necessarily involves a short-term approach. Plato as well as Aristotle understood this 'vice' as something driven by desire, which is inherent. It is not because of a propensity to sin (similar to what is stated as 'original sin'); it is rather because humans have an inherent weakness for taking the short cut, which leads to deciding on a short-term approach. Qur'anic principles describe this notion as being a test, which is inherent to the purpose of life for all humans. Figure 6.5 shows how Islam strikes a balance between totalitarianism and individualism, the former representing collective morality and the latter individual morality. Because the two values often collide due to the conflict between self-interest and collective interest, a balance must be struck. This can be done with optimized lifestyles. Every such test has both an individual and a political component. Figure 6.5 shows how the balance between individual liberty and regulatory control is made; the Qur'anic approach avoids both extremes. In the end, what we have is an optimization of two contrasting trends. If regulatory control is total, one is not expected to have any accountability, and a test loses its meaning. On the other hand, if individual liberty is excessive, then it leads to anarchy and, at the same time, accountability skyrockets, making it impractical for humans to survive the tests with their limited ability. The intersection of these two graphs represents the optimum, which in Aristotle's words is the 'middle of the extremes' and in the Qur'an 'ummatan wassata' (the nation of the middle path).

Figure 6.5 Islamic society finds an optimum between individual liberty and regulatory control.
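The optimum of Figure 6.5 can be illustrated numerically. The linear functional forms below are our own assumptions (the source gives only the qualitative shapes): accountability rises with individual liberty, while the practicality of surviving the test falls as liberty approaches anarchy, and the optimum lifestyle sits where the two curves intersect.

```python
# Purely illustrative: find the crossing point of the two curves in
# Figure 6.5 by bisection over liberty in [0, 1] (0 = total control,
# 1 = anarchy). The linear forms are assumptions, not from the source.
def accountability(liberty):
    return liberty            # rises as regulatory control is relaxed

def practicality(liberty):
    return 1.0 - liberty      # falls as liberty approaches anarchy

lo, hi = 0.0, 1.0
for _ in range(50):
    mid = (lo + hi) / 2
    if accountability(mid) < practicality(mid):
        lo = mid
    else:
        hi = mid

print(round((lo + hi) / 2, 3))  # 0.5 -- the 'middle of the extremes'
```

With these symmetric forms the optimum lands exactly at the midpoint; any other monotone-rising and monotone-falling pair would still yield a unique interior optimum, which is the point of the figure.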

6.7.1 Static and Dynamic Intention If intention has to be the root of any action, it cannot be elusive, evasive, or fleeting. It must be held constant. If intention has to be the source of accountability, it also has to be immutable and truly intangible to others. Such form of ‘intention’ is absent in English language. In Arabic, on the other hand, there are two words for intention. One (Niyah) is static and constant after it

appears. It is the root of all actions, and its literal meaning is ‘real objective’. Two things are known about this word. First, humans will be judged by the niyah of an action (the first Hadith in the book of Bukhari). Second, niyah sets direction; an Arabic proverb says that niyah sets the direction (of the saddle). Scientifically, it means that if the niyah (root) is real, subsequent events will also be phenomenal, or in conformance with the niyah. This is the definition of ‘real’: it must have a real root (Islam et al., 2010; Khan and Islam, 2012; 2016). However, having a real root is not a sufficient condition for a phenomenal destination/outcome. The pathway, or subsequent steps, must also have real roots every time there is a bifurcation point. Because these branches already have an original root, each branching cannot have a different root; another word is needed to describe the process of bifurcation. In the Arabic language, there is a word for dynamic intention: qsd. This is the root word for economics (in the economizing sense) and is used for dynamic optimization. So, for an action to be phenomenal, it must have a real niyah, and all subsequent branching must have qsd that is aligned with the niyah.

This point is illustrated in the following example. Suppose you are going to the airport. Before you leave, your niyah is to go to the airport. This is static intention. You turn left at a traffic light; your qsd is to avoid delay. This is dynamic intention and is in line with the niyah, hence phenomenal. Then you stop at a coffee shop. Your phenomenal qsd is to make sure you do not fall asleep on the road. Why is it phenomenal? Because your niyah is still the same. However, if you stopped at a bar and started to drink, or watched pornography in an internet cafe, your qsd would not be in line with your niyah. This aphenomenal qsd is not optimized, and the entire process risks becoming aphenomenal.
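The airport example can be sketched as a simple check over a sequence of decision points. The data structures and the string-based ‘alignment’ test below are our own illustrative assumptions, not anything from the text:

```python
# Minimal sketch of the niyah (static intention) / qsd (dynamic
# intention) logic from the airport example. The records and the
# substring 'alignment' test are illustrative assumptions.

NIYAH = "reach the airport"  # static intention, fixed before the action

# each decision point carries its own dynamic intention (qsd)
decisions = [
    ("turn left at the light", "avoid delay -> reach the airport"),
    ("stop for coffee",        "stay awake -> reach the airport"),
]

def is_phenomenal(niyah, path):
    """The whole action is phenomenal only if every qsd along the
    path remains aligned with the original niyah."""
    return all(niyah in qsd for _, qsd in path)

print(is_phenomenal(NIYAH, decisions))  # -> True: every branch aligned

# one misaligned branch (the bar stop) spoils the entire process
detour = decisions + [("stop at a bar", "pass the time")]
print(is_phenomenal(NIYAH, detour))     # -> False
```

The point the sketch makes is structural: a single misaligned qsd at any bifurcation is enough to render the whole chain aphenomenal, no matter how sound the original niyah was.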
This process of optimization is called iqtisad, the term used for ‘economics’ in Arabic. This is the essence of ‘intention-based’ economics (Zatzman and Islam, 2007) as well as ‘intention-based’ science (Khan and Islam, 2012; 2016). It turns out that, every day, we go through hundreds of thousands of such decision points. For an action to be absolutely phenomenal, we must meet the conditions at every juncture. That is why it is impossible for any human to be perfect, or to complete even one action that is 100% in tune with the original intention, or niyah. This fact is supported by the meaning of the word insan (human in Arabic), which literally means adorable, forgetful, and trusting, all in one. Note that ‘perfect’ is the first status of a child, and all humans are born with these child-like qualities. The word for ‘trust’ is iman in Arabic. This is often mistranslated as ‘belief’ or ‘faith’. Therefore, when scientists and theologians speak of humans being hard-wired to ‘believe’ or ‘have faith’, they are contradicting their own respective theories of the ‘born sinner’, the ‘blank slate’, and the ‘defective gene’. Another reason for a human’s inability to complete a task perfectly is that his tangible actions are part of the universal order, over which he has no control. It suffices to say that both niyah and qsd are disconnected from the tangible action and contribute only to the infinitely long term of the individual. These actions are infinitely beneficial to the individual if the niyah and qsd’s are phenomenal, and infinitely harmful if they are not.

Figure 6.6 shows how a phenomenal intention sets the direction of subsequent cognition. This is equivalent to taking the approach of obliquity. The aphenomenal niyah (in Arabic it is called ammaniya, which means ‘wishful thinking’), on the other hand, represents the myopic approach that drives the aphenomenal model (see Figure 6.7).

Figure 6.6 Good intention launches the knowledge-based model, whereas a bad intention throws cognition off toward ignorance and prejudice.

Figure 6.7 Intentions are the driver of sustainability.

Individually, each of the decision points reflects a point of singularity. The first decision point (left origin) is the site where a phenomenal static intention (niyah) would launch the upward-moving graph. Static intention (niyah) is the singularity point because it is the only freedom humans have; as such, it is the only control they can exert and the only thing for which they can be held accountable. This is crucial because intention cannot be controlled by perception, which can be manipulated. A phenomenal intention means the action is motivated by pleasing the Creator, who set the purpose of life. What does this mean in practical terms? It means one would not resort to violating anyone’s rights, including one’s own. No immoral motivation (for instance, greed, wasteful habits, lust) is a starting point. An aphenomenal intention, on the other hand, is any intention other than ‘pleasing the Creator’, including pleasing oneself. What does it mean? It means one ends up oppressing oneself if one resorts to doing things out of selfish purpose. This is in sharp contradiction to what the Eurocentric economic models have long pontificated. With an aphenomenal static intention, one starts to move downward. This is the pathway that instigates desire, greed, and all the other very short-term drivers of the economy. As further decisions are made, the pathway for dynamic intention (qsd) opens up. Any person still has an opportunity to return to the path of sustainability; for that, he/she has to return to the origin and start over on the upward path that has a phenomenal intention attached to it. This backtracking is a moral act in itself and will pay long-term dividends. On the other hand, if a person continues to travel the downward path, arrogance sets in, and the person continues to make economic decisions that are unsustainable.

This ‘time factor’ is also closely associated with human conscience, which is the driver of intention. Only phenomenal intentions count toward the long-term future of an individual, while they have no bearing on the universal order. If one connects this to the definition of a conscientious deed (‘Amal in Arabic, rooted in intention that is in conformance with the universal order), it becomes clear that everything is connected through the time function. This word ‘Amal is what we presented in Chapter 3 as the labour that is conscience-driven. The static intention, niyah, cannot be part of the time function, because if it were, humans would have no control over it and therefore zero freedom, making accountability absurd and illogical. On the other hand, if 100% control and ownership of intention is given to the individual, humans can be held accountable without being responsible for the universal order, which is clearly not under their control and remains independent of human intervention. This is the logic used in the Qur’anic claim that men will be judged by their Am’al and the Hadith’s (the first hadith in the book of Bukhari) claim that humans will be judged by their niyah. It is of interest to note that there are two distinctly different words for deeds or actions.
They are y’afal, action that is part of the universal order but not associated with the original intention, niyah; and y’amal, individual action that is connected to the original intention (akin to a conscientious act). The Qur’an presents a clear directive to do y’amal. In fact, every time the word iman (trust, the original trait of a human) is mentioned, it is followed by a directive toward well-intended deeds or a similar directive connected to conscience. The task at hand for humans is to make sure their original intention is in conformance with the universal order and that their subsequent dynamic intentions are also turned in the same direction as the original intention. The term ‘turning’ is also the root word for heart (qlb) in Arabic, while the word qsd is the root word for economics (as in economizing). That is why Zatzman and Islam (2007) called scientifically consistent economics ‘intention-based’ economics. It is of interest to note that this model was first implemented by Prophet Muhammad in the early 7th century and was brought to Europe by Muslim scholars, most notably by Averroes, who is known as the ‘father of secular philosophy in Europe’. We adopt this model and call it the knowledge-based model, as opposed to the currently used model that finds its root in Thomas Aquinas, the ‘father of doctrinal philosophy’.

6.7.2 Role of Intention in Natural Economy

Considered in its most general aspect, the universe comprising all phenomena can be comprehended as two broad categories: the mechanical and the organic. Many mechanical phenomena can be found within the organic category. Certain aspects of many organically-based phenomena can be defined or accounted for entirely within the category that

comprises all forms of mechanism. Frequency, and its measurement, often appears to bridge this mechanical-organic divide. Organically-based frequencies have an operating range which itself varies, e.g., the length of the lunar year. On the one hand, purely mechanical frequencies also have an operating range, and this range can be set or otherwise manipulated up to a point, e.g., the resonant frequency at which a bridge structure may collapse in a sustained high wind. On the other hand, although organically-based frequencies can be detected and measured, there is usually little or nothing, beyond a very definite window that must be determined by trial-and-error, that can be done to manipulate such frequencies.

Problems arise when such frequency-based devices are treated as the generator of values for a variable that is treated as independent, in the sense that we take Newton’s fictional time-variable t to be varying “independently” of whatever phenomenon it is supposed to be measuring/calibrating/counting. Outside of a tiny instantaneous range, e.g., the period in which ∆t approaches 0, naturally-sourced frequencies cannot be assumed to be independent in that way. This is a false assumption whose uncritical acceptance vitiates much of the eventual output of the measuring/calibration effort. Such a problem arises the moment one makes the phenomenal assumption that frequency is fixed. That is the idea behind the unit of the ‘second’ for time (from solar orbit to cesium radiation frequency). New science fixed the frequency (much like fixing the speed of light), then back-calculated time. No wonder that, later on, time was made into a function of perception (relativity), thereby making the unique functionality schizophrenic.
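The cost of pretending that time ticks uniformly can be made concrete with a short numerical sketch. The signal and the sampling times below are hypothetical, chosen only to show that differencing irregularly sampled data as if the step were constant distorts the estimated rate of change:

```python
import math

# Hypothetical signal sampled at non-uniform times. Treating the
# step as a single constant dt (a uniform average) distorts the
# estimated derivative, which is the hidden assumption at issue.

t = [0.0, 0.1, 0.15, 0.4, 0.45, 1.0]   # irregular sample times
f = [math.sin(x) for x in t]           # the sampled signal

def derivative_true_steps(t, f):
    """Forward differences using the actual, varying dt."""
    return [(f[i+1] - f[i]) / (t[i+1] - t[i]) for i in range(len(t) - 1)]

def derivative_fixed_step(t, f):
    """Forward differences pretending dt is the uniform average step."""
    dt = (t[-1] - t[0]) / (len(t) - 1)
    return [(f[i+1] - f[i]) / dt for i in range(len(t) - 1)]

good = derivative_true_steps(t, f)
bad = derivative_fixed_step(t, f)
# near t=0 the true derivative of sin is cos(0)=1; the fixed-step
# estimate is far off because the real step there is much smaller
print(round(good[0], 3), round(bad[0], 3))  # -> 0.998 0.499
```

The varying-step estimate recovers the true slope near the origin, while the fixed-step version is off by a factor of two there; the error is entirely an artifact of assuming a constant ∆t.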
Not only is it the case that “such a problem arises the moment you make the phenomenal assumption that frequency is fixed.” Even if you allow that t is not fixed and undergoes changes in value, i.e., that its frequency is not necessarily fixed, this problem persists if the subtler but still toxic assumption is accepted that the rate at which the variable t changes — ∆t — is constant in some “continuous” interval over which the derivative df(t)/dt may be taken. Here is where we uncover the truly toxic power of Newton’s Laws of Motion over conscience-based consciousness: they invoke a ‘known’ function, which is itself aphenomenal. The only valid function is one with an infinite order of periodicity (beyond chaotic).

In order to conform to Nature, one must align the intention with the long-term direction of Nature. This is done by successively asking questions, all of which have dynamic intentions, I(t), aligned with the original intention, Io. In Arabic, the original (source) intention (before an action has commenced) has the root niyah, whereas the root word of this time-function intention is qsd. This root has at least two meanings: 1) intention after an action has begun; 2) to economize (more akin to optimizing, not to be confused with saving)4. Scientifically, a source intention is equivalent to saying, “my intention is to go to the airport”. However, as the driving continues and the driver comes across a red light, a traffic jam, or a detour, he says, “my qsd (dynamic intention) in turning is to avoid delay” (see Figure 6.8). Scientifically, intangibles are continuous time functions, ranging from 0 and extending to infinity.

Zero here refers to the source and infinity refers to the end. In a yin-yang process, this forms a duality and a balance. The source of a human act is the intention, Io. The source of each of the subsequent bifurcation points is the dynamic intention, Id. Correct intentions at each decision point lead to de-linearized optimization of time, coinciding with total conformance with Nature and the universal order. Because time is the dependent variable, this optimization also applies to both matter and energy, representing maximum economization and forming the basis for economic development using the science of intangibles. If productivity is work/time, minimizing the time required maximizes productivity. That is why the nature-science approach is the most productive. The outcome of this process is automatic optimization, which translates into the highest possible efficiency. This is demonstrated in Figure 6.8.
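The claim that correct choices at every bifurcation point minimize total time (and hence maximize productivity = work/time) can be sketched as a shortest-path computation. The decision graph, its weights, and the node names below are hypothetical assumptions used only for illustration:

```python
import heapq

# Hypothetical decision graph: nodes are decision points, edge
# weights are the time each branch costs. Choosing well at every
# bifurcation minimizes total time, maximizing work / time.

graph = {
    "start": [("a", 2.0), ("b", 5.0)],
    "a":     [("goal", 4.0), ("b", 1.0)],
    "b":     [("goal", 1.0)],
    "goal":  [],
}

def min_time(graph, src, dst):
    """Dijkstra's algorithm: least total time from src to dst."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, w in graph[node]:
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")

work = 10.0                    # fixed amount of work accomplished
t_best = min_time(graph, "start", "goal")
print(t_best, work / t_best)   # -> 4.0 2.5
```

A greedy look at any single junction (e.g., taking the direct start→b edge) would cost more total time than the globally optimized path; the optimization is over the whole chain of decisions, not each step in isolation.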

Figure 6.8 Niyah is original intention, whereas qsd is dynamic intention.

6.8 The Gold Standard for Sustainable Economy

Figure 6.9 shows the final comparison between the modern economy and the Islamic economy. We have seen in Chapter 3 how the modern economic system, manipulated through both conservatives and liberals, has spiralled down to the current state of economic extremism, with tremendous socio-political crises at both local and global levels. The only thing that separated the ‘left’ from the ‘right’ was hypocrisy. Both are disciples of the Money, Sex, and Power trinity, yet the liberals managed to shed crocodile tears. Both consider anything other than the Eurocentric outlook as ‘savage’, but package that long-discredited Orientalist hubris as some objective view. The progressive US television personalities who are paraded on air as scientific experts demonstrate their good will toward the Islamic world by occasionally doing pieces on Andalusia and how, in their assessment, the Renaissance and Enlightenment owe a great debt to the Muslim world which, during centuries of European darkness, “tended the flame of ancient knowledge” and even “imported knowledge from India”, as if they were like today’s minimum-wage security guards in America who make no contribution to what takes place in the hallowed halls they supposedly protect. Listening to these fraudulent experts peddle their scripted lines, one would think that the great thinkers of the Islamic world were mere librarians. True Islam, of course, has a very different outlook. The Islamic economy takes a very different approach and is based on the “Olympic model” (Lakhal et al., 2007; Lakhal et al., 2009) with five rings, symbolizing the gold standard, zero greed, zero interest, zero waste, and zero manipulation.

Figure 6.9 Summary of Islamic economy vis-à-vis modern economy.

The gold standard anchors the wealth and pegs the economy to a reality that cannot be manipulated. However, the economy cannot function unless zero greed, zero interest, zero waste, and zero manipulation are introduced at the personal, national, and global levels. On that basis, the economy can only grow exponentially. This is exactly what happened during the Rashedun Caliphate, much celebrated by Ibn Khaldun as the only standard of social justice. In this process, the long-term approach is the defining principle of the Islamic economy, in sharp contrast to the modern economy, which is patently myopic. Zero manipulation is driven by the premise that the natural is 100% good; intervention is needed only when this natural state is manipulated by a party that does not have these five principles at its core. As such, government intervention is still in conformance with zero manipulation and is in fact motivated by ensuring that zero manipulation is maintained. The period of the Rashedun Caliphate, just like the Caliphate of Prophet Muhammad, saw numerous wars, but not a single one was an offensive war, let alone a war motivated by greed (Jaan Islam, 2016). The society was an epitome of socio-political justice (Islam et al., 2013) and of economic growth and prosperity (Shatzmiller, 2011).

1 Frank Hyneman Knight (7 November 1885 – 15 April 1972) was an American economist who spent most of his career at the University of Chicago, where he became one of the founders of the Chicago School of Economics. Nobel laureates Milton Friedman, George Stigler, and James M. Buchanan were all students of Knight at Chicago.

2 This is considered an authentic hadith (hasan, or ‘good’) by al-Albani.

3 “Agio” is the premium which one is willing to pay for present goods as compared to having the same goods in the future.

4 The word economics in Arabic is indeed based on the root word qsd. Karl Marx identified the role of intention in socio-economic development. However, his use of the word ‘intention’ is not likely to be the original intention.

Chapter 7
Framework of Economics of Sustainable Energy

7.1 Introduction

After the financial collapse of 2008, it has become clear that there are inherent problems with the current economic world order. Modern civilization has not seen many economic models. In fact, the work of Karl Marx was the only notable economic model that offered an alternative to the only other model available to the Europe-centric culture that dominated the modern age. After the collapse of the Soviet Union, the world was left with only one economic model. Therefore, when the financial collapse of 2008 took place, it created panic, since it became clear that the current economic model was not suitable for the Information Age. The same problem occurred in all aspects of modern civilization, ranging from engineering to medical science.

This chapter focuses on economic models. In previous chapters, we examined the root causes behind the collapse of economic models. We also re-examined the first premises of all theories and laws, for both economics and technology development. If the first premise does not conform to natural laws, then the model is considered unreal (not just unrealistic) – dubbed an “aphenomenal model.” With these aphenomenal models, all subsequent decisions lead to outcomes that conflict with the stated “intended” outcomes. At present, such conflicts are explained either with doctrinal philosophies or with a declaration of paradox. Our analysis showed that doctrinal philosophy, being an aphenomenal science (emerging from a false and illogical first premise and/or illogical extrapolation), is the main reason for the current crisis that we are experiencing. The statement of a paradox helps us procrastinate in solving the problem, but it does nothing to solve the problem itself. Both of these states keep us squarely in what we call the Einstein box.
(Albert Einstein famously said, “The thinking that got you into the problem is not going to get you out.”) Instead, if the first premise of any theory or “law” were replaced with a phenomenal premise, the subsequent cognition would encounter no contradictions with the intended outcomes. The end results show how the current crises can not only be arrested but also reversed. As a result, the entire process would revert from being unsustainable to sustainable. The vicious loop that a sustainable development model must close involves this sequence: unsustainable engineering → technological disaster → environmental calamity → financial collapse. This sequence has become exposed since the dawn of the Information Age.

7.2 Delinearized History of Modern Age

Imagine traveling inside a sealed train looking out at the world you are passing by, insulated completely and infinitely into the future from the effects of time as well as cut off from all direct contact with immediate surroundings, i.e., the outside environment. All communications reaching you inside the train assert and reassure the following to be “true”:

1. ‘Refining’ millions of barrels of crude oil daily by incinerating the liquid portion sustains an entire lifestyle, complete with plastic durables produced for every conceivable application, including infant pacifiers and containers for synthetic nourishing “formula”;

2. cleanliness being next to godliness, dirt is declared the Essence of Unclean, against which an endless war is to be waged by such saviors as King Carbon of Tetrachloride, Prince Benzene, et al.;

3. the alimentary needs of the planet are secured through “Green Revolutions” that replace organic natural fertilizers with entirely synthetic chemical ones, until such time as organic farming can be converted into something highly profitable in the developed countries, while the peoples of developing countries, rendered landless by the previous Green Revolution, are prostrate and literally rioting to obtain more chemical fertilizers;

4. severe reduction of ‘the best creation of God’ is accomplished by ‘the Sex Revolution’, which was intended to ‘liberate’ women, and by ‘family planning’, which was to increase the sustainability of families;

5. beauty can be not only in the eye of the beholder but available in increments carefully differentiated by income as to the amount of gloss and fragrance, extracted from toxic chemicals such as formaldehyde (lipstick), butane (body spray), titanium oxide (face cream), etc.

We could go on: the list is endless. But consider, for example, the basis for the sustainable lifestyle of point #1 above: refining based on adding numerous toxic chemicals. The crude oil is plant material that is millions of years old. Incineration takes place by way of environmentally damaging internal-combustion engines and the polluting effects of a power supply system that not only generates electricity in a severely polluting manner but produces alternating current, which is utterly alien to the natural environment.
The chemical constituents of the pacifier and the formula bottle are utterly toxic, while the defense of their use at room temperature as being chemically inert blithely ignores the evidence of out-gassing and low-temperature pyrolysis that every plastic undergoes in the presence of oxygen at standard conditions. As for point #2, dirt in the sense of finely ground soil particles, or wood ash, is the greatest natural cleaning agent, with an environmental toxicity of zero. As far as the environment is concerned, the “fertilizers” of point #3 that push quicker plant growth or higher crop yields are just so many more toxic byproducts of petroleum refining. As for point #4, ‘family planning’ would be behind the disappearance of families and family values in the West and in the ruling class of the rest of the world. Today, everyone is convinced that human beings may be the greatest creation of God, but a high concentration of these human beings is held up as the only reason for the poverty of the third world! As for the “beautifying agents” of point #5: the butane in body spray and the formaldehyde in lipstick only beautify the wallets of energy companies engaged in selling their toxic byproducts in the name of cosmetics.

None of these is a mistake. Each stands 180 degrees in opposition to the needs of society to preserve personal health or the environment, in the short term and the long term. Note the reason why this is the case: because the prevailing outlook does NOT PROVIDE a teleological explanation that

allows one to differentiate between sustainable and unsustainable economic activities. This is what Aristotle aimed at and what Islamic scholars achieved. It is socially unjust, economically unjust, and few if any of those with a seat inside the sealed train are aware of, or capable of grasping, any portion of their own responsibility. Few, if any, of the rest of us outside the sealed train who have developed or acquired a point of view on these matters have not already become so thoroughly “educated” to observe Nature and the environment from a stationary position outside it that we, too, are barely aware of the existence or passage of the sealed train or its cargo, and have the greatest difficulty grasping how the future and the long term are entirely prefigured in everything taking place right now, in the present of the apparently short-term. Within today’s so-called “energy crunch”, we are all of us living a very, very Big Lie.

What more lies ahead? The following brief catalogue should supply more than a hint: “human growth hormone”; pharmaceutical medicines that maintain but do not cure many conditions for which known herbal cures are banned from the market; papal bills of infallibility (provided by and in the name of official Science) for the chlorination of water systems to the exclusion of any other method of sustaining potable supplies, and for the chemical pasteurization of milk to the legally enforced exclusion of any other approach to managing bacterial growth; not to mention an even lengthier sub-catalogue of social, political, and economic “my way or the highway” impositions on the public by those holding down privileged positions aboard that sealed train. Table 7.1 shows the lies involved in many of the modern-day breakthrough technologies.
Table 7.1 Some “breakthrough” technologies (from Khan and Islam, 2016).

Product | Promise (knowledge at t = ‘right now’) | Current knowledge (closer to reality)
Microwave oven | Instant cooking (bursting with nutrition) | 97% of the nutrients destroyed; produces dioxin from baby bottles
Fluorescent light (white light) | Simulates the sunlight and can eliminate ‘cabin fever’ | Used for torturing people, causes severe depression
Prozac (the wonder drug) | 80% effective in reducing depression | Increased suicidal behavior is amongst the large list of side-effects; the original effectiveness of this drug class is scrutinized by later medical trials and findings
Antioxidants | Reduces aging symptoms | Gives lung cancer
RU-486 (the abortion pill) | ‘Harmlessly’ avoid pregnancy while liberating women | Caused death and increases the chances of contracting STDs
Vioxx | Best drug for arthritis pain, no side effect | Increases the chance of heart failure
Viagra | Impotent man can go on for ever | Causes death and blindness
Coke | Refreshing, revitalizing | Dehydrates; used as a pesticide in India; primary cause of stress and obesity
Transfat | Should replace saturated fats, incl. high-fiber diets | Primary source of obesity and asthma
Simulated wood, plastic gloss | Improve the appearance of wood | Contains formaldehyde, which causes Alzheimer’s
Cell phone | Empowers, keeps connected | Gives brain cancer, decreases sperm count among men
Chemical hair colors | Keeps young, gives appeal | Gives skin cancer
Chemical fertilizer | Increases crop yield, makes soil fertile | Harmful crop; soil damaged
Chocolate and ‘refined’ sweets | Increases human body volume, increasing appeal | Increases obesity epidemic and related diseases
Pesticides, MTBE | Improves performance | Damages the ecosystem
Desalination | Purifies water | Necessary minerals removed
Wood paint/varnish | Improves durability | Numerous toxic chemicals released
Leather technology | Adds gloss and durability | Renders them brittle and short-living
Freon, aerosol, etc. | Replaced ammonia that was ‘corrosive’ | Global harms immeasurable and should be discarded

The essence of Eurocentric culture lies in the promotion of a top-down model by its various Establishments (Church, State, Corporate) since medieval times. This unstable model, based on self-interest, short-term gain, and tangible benefit, leads to real chaos. It is inherently implosive, destined to shrink to negative infinity, yet likely to cause tremendous damage to mankind on the way to its final demise. The damage occurs in ways that amount to converting what is good or functional into something evil or dysfunctional – while countering critics en route with sage-like observations about “all Progress coming at a price …”. Consider for the moment some clear examples from everyday living of this phenomenon – amassing profits and tangible benefits for a select few in the short term by converting some of the greatest gifts into some of the greatest poisons and hazards, while the original promise was exactly the opposite of the outcome.

Islam et al. (2010a, 2010b) introduced the HSS®A® (Honey → Sugar → Saccharin® → Aspartame®) pathway, which governs all aspects of the modern age. The HSS®A® pathway is a kind of metaphor for many other things that originate from natural sources. The following discussion lays out how it works. Once this metaphor is understood, the driving principles behind the modern age are clearly identified. With that, it becomes possible to figure out a way to reverse a process that has been characterized by modern scholars as disastrous.

Over the years, it has become a common idea among engineers, and to the public, to associate an increase in the quality, and/or qualities, of a final product with the insertion of additional intermediate stages of refining the product. If honey – taken more or less directly from a natural source, without further processing – was fine, surely the sweetness that can be attained by refining sugar must be better. If the individual wants to reduce their risk of diabetes, then surely further refining of the chemistry of “sweetness” into products such as Saccharin® must be better still. And why not use even more sophisticated chemical engineering to further convert the chemical essence of this refined sweetness into forms that are stable in the liquid phase, such as Aspartame®?
Zatzman and Islam (2007) detailed the following transitions in commercial product development and argued that the transitions amount to an increased focus on tangibles in order to increase the profit margin in the short term. The quality degradation is obvious, but the reason behind such technology development is quite murky. At present, the science of tangibles is incapable of lifting the fog from this mode of technology development. Figure 7.1 is an example of how the sustainable can be turned into the unsustainable: as further refining and ‘value’ addition is performed, the emission of toxic gases skyrockets and the liquid itself becomes infested with toxic chemicals. Consider the additives added during the refining of crude oil.

Figure 7.1 Documenting pathways by which intangible natural gifts are destroyed by being converted into tangibly valuable commodities.

7.2.1 The Honey-Sugar-Saccharin-Aspartame Degradation in Everything

HSS®A® is the most notorious accomplishment par excellence born of engineering based on New Science, allowing the transition from natural to artificial while simultaneously hiding all trails of characteristic time. The HSS®A® label for this pathway generalizes the seemingly insignificant example of the degradation of natural honey into carcinogenic “sweeteners” like Aspartame® because, as Albert Einstein most famously pointed out, the environmental challenges posed by conventional suppression or general disregard of essential phenomena in the natural environment, such as pollination, actually threaten the continued existence of human civilization in general. The HSS®A® pathway is a metaphor representing many other phenomena, and chains of phenomena, that originate from a natural form and become subsequently engineered through many intermediate stages into “new” products. In the following discussion, we lay out how it works. Once it is understood how disinformation works, one can figure out a way to reverse the process by avoiding aphenomenal schemes that lead to ignorance packaged with arrogance. Khan and Islam (2012; 2016) argue that the transition from honey has been deliberate, with the clear intent of profiteering at the expense of the environment. Table 7.2 lists various components of nature and shows how they have been polluted to the point of crisis while the profit margin skyrocketed.

Table 7.2 The transition from natural to artificial commodities, and the reasons behind the transition.

Original natural component (high value) | Final engineered product (very negative value) | Driver of the technology (artificial product)
Air | Cigarette smoke | Profit of tobacco processing (e.g., nicotine)
Crude oil | Refined oil | Profit of refining and chemical processing (chemicals, additives, catalysts, etc.)
Natural gas | Processed gas | Profit of chemical companies (MEA, DEA, TEA, methanol, glycol, etc.)
Water | Soft drinks, carbonated water, sports drinks, energy drinks | Profit of chemical companies (artificial CO2, sugar, saccharin, aspartame, sorbitol, synthetic "nutrients", etc.)
Tomato | Ketchup | Profit to the manufacturer and chemical companies (sugar, additives, preservatives, etc.)
Egg | Mayonnaise | Profit to the manufacturer and chemical companies (sugar, additives, preservatives, etc.)
Corn, potato, etc. | Chips, corn flakes | Profit for the manufacturers and chemical companies (transfat, sugar, additives, vitamins, non-transfat additives, etc.)
Milk | Ice cream, cheesecake | Profit for chemical companies and manufacturers (sugar, no-sugar sweeteners, flavors, vitamins, additives, enzyme replacements, etc.)

Ever since the introduction of the culture of plastic over a century ago, the public has been indoctrinated into associating an increase in the quality, and/or qualities, of a final product with the insertion of additional intermediate stages of "refining" the product. If honey – taken more or less directly from a natural source, without further processing – was fine, surely the sweetness attained by refining sugar must be better. If the individual wants to reduce the risk of diabetes, then surely further refining of the chemistry of "sweetness" into such products as Saccharin® must be better still. And why not even more sophisticated chemical engineering to convert the chemical essence of this refined sweetness into forms that are stable in the liquid phase, such as Aspartame®? As we "progress" from honey to sugar, the origin remains real (sugar cane or beet), but the process is tainted with artificial inputs, starting with electrical heating, chemical additives, bleaching, etc. Further "progress" to Saccharin® again starts from a real origin, but this time the original source (crude oil) is a far older food source than the source of sugar. With steady-state analysis, they both appear to be of the same quality.

As the chemical engineering continues, we arrive at the final transition, to Aspartame®. Indeed, nothing is phenomenal about Aspartame®: both the origin and the process are artificial. The overall transition from honey to Aspartame® has thus been from 100% phenomenal to 100% aphenomenal. Considering this, what economic calculations are needed to justify this replacement? It becomes clear that, without considering the phenomenality feature, any talk of economics can only mean the "economics" of aphenomenality. Yet this remains the standard of neo-classical economics. Throughout the modern era, economics has remained the driver of the education system. The matter of intention is not considered in economies of scale, leading to certain questions never being answered. No one asks whether any degree of external processing of what began as a natural sugar source can or will improve its quality as a sweetener. Exactly what that process, or those processes, would be also remains unasked. No sugar refiner worries about how the marketing of his product in excess is contributing to a diabetes epidemic. The advertising that is crucial to marketing this product certainly will not raise this question.

Guided by the "logic" of the economies of scale, and the marketing effort that must accompany it, greater processing is assumed to be, and accepted as being, ipso facto good, or better. As a consequence of the selectivity inherent in such "logic," any other possibility within the overall picture – such as the possibility that, as we go from honey to sugar to saccharin to aspartame, we go from something entirely safe for human consumption to something cancerously toxic – does not even enter the frame. Such a consideration would prove very threatening to the health of a group's big business in the short term. All this is especially devastatingly clear when it comes to education and natural cognition.
Over the last millennium, even after "original sin" was discredited as aphenomenal, it has been widely and falsely believed that natural cognition is backward-looking and that humans, incapable of finding their own path to knowledge, must be indoctrinated into being enlightened. Edible natural products in their natural state are already good enough for humans to consume at some safe level and to process further internally in ways useful to the organism. We are not likely to consume any unrefined natural food source in excess. However, the refining that accompanies the transformation of natural food sources into processed-food commodities also introduces components that interfere with our normal ability to push a natural food source aside after some definite point. Additionally, with externally processed "refinements" of natural sources, the chances increase that the form in which the product is eventually consumed will include compounds that occur nowhere in nature and that the human organism cannot usefully process without excessively stressing the digestive system. After a cancer epidemic, there is great scurrying to fix the problem. The cautionary tale within this tragedy is that, if the HSS®A® principle were considered before each new stage of external processing were added, much unnecessary tragedy could be avoided.

There are two especially crucial premises of the economies of scale that lie hidden within the notion of "upgrading by refining": (a) unit costs of production can be lowered (and unit profit therefore expanded) by increasing output Q per unit time t, i.e., by driving ∂Q/∂t (the temporal rate of change of Q) unconditionally in a positive direction; and (b) only the desired portion of the end-product Q is considered to have tangible economic and, therefore, also intangible social "value," while any unwanted consequences – e.g., degradation of, or risks to, public health, damage to the environment, etc. – are discounted and dismissed as false costs of production.

Note that, if relatively free competition still prevailed, premise (a) would not arise even as a passing consideration. In an economy lacking monopolies, oligopolies, and/or cartels dictating effective demand by manipulating supply, unit costs of production remain mainly a function of some given level of technology. Once a certain proportion of investment in fixed capital (equipment and ground-rent for the production facility) becomes the norm among the various producers competing for customers in the same market, the unit costs of production cannot fall, or be driven arbitrarily, below a certain floor level without risking business loss. The unit cost thus becomes downwardly inelastic. The unit cost of production can become downwardly elastic, i.e., capable of falling readily below any asserted floor price, under two conditions:

a. during moments of technological transformation of the industry, in which producers who are first to lower their unit costs by using more advanced machinery gain market share, temporarily, at the expense of competitors; or
b. in conditions where financially stronger producers absorb financially weakened competitors. In neoclassical models, which all assume competitiveness in the economy, this second circumstance is associated with the temporary cyclical crisis – the crisis that breaks out from time to time in periods of extended oversupply or weakened demand.
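Premise (a) can be made concrete with a minimal numeric sketch. The cost figures below are hypothetical, chosen only to show the arithmetic: spreading a fixed cost over a growing output Q drives the unit cost toward the variable-cost floor, widening the unit profit at any fixed price.

```python
# Hypothetical illustration of premise (a): unit cost falls as output Q rises,
# because the fixed cost is spread over more units. All numbers are invented.

def unit_cost(q, fixed=1000.0, variable=2.0):
    """Average cost per unit at output q: fixed cost spread over q, plus variable cost."""
    return fixed / q + variable

def unit_profit(q, price=15.0):
    """Profit per unit at a fixed selling price."""
    return price - unit_cost(q)

for q in (100, 1000, 10000):
    print(f"Q={q:>6}: unit cost={unit_cost(q):6.2f}, unit profit={unit_profit(q):6.2f}")
```

Note that in this sketch the unit cost can never fall below the variable cost of 2.0 per unit, which is the "floor" the passage describes; only a change of technology (a lower variable term) or absorption of a competitor moves that floor.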
In reality, contrary to the assumptions of the neoclassical economic models, the impacts of monopolies, oligopolies, and cartels have entirely displaced those of free competition and have become the norm rather than the exception. Under such conditions, lowering unit costs of production (and thereby expanding unit profit) by increasing output Q per unit time t, i.e., by driving ∂Q/∂t unconditionally in a positive direction, is no longer an occasional and exceptional tactical opportunity. It is a permanent policy option: monopolies, oligopolies, and cartels manipulate supply and demand because they can.

Note that premise (b) points to how, where, and why consciousness of the unsustainability of the present order can emerge. Continuing indefinitely to refine nature out, by substituting ever more elaborate chemical "equivalents" hitherto unknown in the natural environment, has started to take its toll. The narrow concerns of the owners and managers of production are at odds with the needs of society. Even as the appropriation of the fruits of production remains private, concentrated in ever fewer hands, production itself has become far more social. The industrial-scale production of all goods and services as commodities has spread everywhere, from the metropolises of Europe and North America to the remotest Asian countryside, the deserts of Africa, and the jungle regions of South America. This economy is not only global in scope but also social in its essential character. Regardless of the readiness of the owners and managers to dismiss and abdicate responsibility for the environmental and human-health costs of their unsustainable approach, these costs have become an increasingly urgent concern to societies in general.

In this regard, the HSS®A® principle becomes a key and most useful guideline for sorting what is truly sustainable for the long term from what is undoubtedly unsustainable. The human being who is reduced to a mere consumer of products is a being marginalized from most of the possibilities and potentialities of his or her existence. This marginalization is an important feature of the HSS®A® principle. There are numerous things that individuals can do to modulate, or otherwise affect, the intake of honey and its impacts. However, there is little – indeed, nothing – that one can do about Aspartame® except consume it. With some minor modifications, the HSS®A® principle helps illustrate how the marginalization of the individual's participation is happening in other areas.

What has been identified here as the HSS®A® principle, or syndrome, continues to work against both the growing global striving toward true sustainability, on the one hand, and the humanization of the environment in all aspects, societal and natural, on the other. Its silent partner is the aphenomenal model, which invents justifications for the unjustifiable and for "phenomena" picked out of thin air. As with the aphenomenal model, repeated and continual detection and exposure of the operation of the HSS®A® principle is crucial for future progress in developing nature-science, the science of intangibles and true sustainability. Table 7.3 summarizes the outcome of the HSS®A® pathway.

Table 7.3 The HSS®A® pathway and its outcome in various disciplines.

Natural state | First stage of intervention | Second stage of intervention | Third stage of intervention
Honey | Sugar | Saccharin® | Aspartame®
Education | Doctrinal teaching | Formal education | Computer-based learning
Science | Religion | Fundamentalism | Cult
Science and nature-based technology | New Science | Engineering | Computer-based design
Value-based (e.g., gold, silver) economy | Coins (non-gold or silver) | Paper money (disconnected from gold reserve) | Promissory note (electronic)

The above model is instrumental in turning any economic process, or chain of processes, that is accountable, manageable, and effective in matching real supply with real demand into a model that is entirely perception-based. To the extent that these economic processes also drive the relevant engineering applications and management that come in their train, such a path ultimately "closes the loop" of a generally unsustainable mode of technology development overall.

7.2.2 What is the Most Insidious Disinformation?

The claim has always been that we are emulating nature. Yet not a single functioning modern technology truly emulates the essence1 of nature, and no economic theory of our time has come to terms with the reality of how economic life happens and how wealth accumulates, systematically, at one pole and poverty at another. Regardless of earnest protestations of intent to the contrary, not a single Eurocentric social system has acted or produced a result that can be called pro-humanity: none of its "democracies" empower people, none of its governments represent the electorate, none of its civil services serve the general public, and observations of nature by its scientists, even in its most lavishly outfitted laboratories, have rarely been translated into pro-nature technology development. Today, some of the most important technological breakthroughs have been mere manifestations of the linearization of nature science: nature is linearized by focusing only on its external features. Computers process information in a manner exactly opposite to how the human brain does. Turbines produce electrical energy while polluting the environment beyond repair, even as electric eels produce much higher-intensity electricity while cleaning the environment. Batteries store very little electricity while producing very toxic spent materials. Synthetic plastic materials look like natural plastic, yet their syntheses follow an exactly opposite path. Furthermore, synthetic plastics do not have a single positive impact on the environment, whereas natural plastic materials do not have a single negative impact. In medical science, every promise made at the onset of commercialization has proven to be the opposite of what actually happened: witness Prozac®, Vioxx®, Viagra®, etc. Nature, on the other hand, has not allowed a single product to have a negative long-term impact. Even the deadliest venoms (e.g., of the cobra or the poison arrow tree frog) have numerous beneficial effects in the long term.
This catalogue carries on in all directions: microwave cooking, fluorescent lighting, nuclear energy, cellular phones, refrigeration cycles to combustion cycles. In essence, nature continuously improves the quality of matter, while modern technologies continue to degrade it into baser qualities. Nature thrives on diversity and flexibility, gaining strength from heterogeneity, whereas the quest for homogeneity seems to motivate much of modern engineering. Nature is non-linear and inherently promotes multiplicity of solutions. Modern applied science, however, continues to define problems as linearly as possible, promoting "single"-ness of solution while particularly avoiding non-linear problems. Nature is inherently sustainable and promotes zero waste, in both mass and energy. Engineering solutions today start with a "safety factor" while promoting an obsession with excess (hence, waste). Nature is truly transient, never showing any exact repeatability or steady state. Engineering today is obsessed with standards and replicability, always seeking "steady-state" solutions. Table 7.4 shows how engineered processes are based on principles diametrically opposite to those of nature. What is true in engineering is also true in Science (as a discipline in the Eurocentric modern age), social science, economics, and practically all disciplines of the modern age, leading up to the Information Age. They are all anti-nature, yet each of them was instituted with the false promise of establishing natural solutions to problems that were created by the activities of the perpetrators or the Establishment.

Table 7.4 Natural processes vs. engineered processes.

Natural processes | Engineered/synthetic processes
1. Multiple/flexible | Exact/rigid
2. Non-linear | Linear
3. Heterogeneous | Homogeneous/uniform
4. Has its own natural process | Breaks natural process
5. Recycles, life cycle | Disposable/one-time use
6. Infinite | Finite
7. Non-symmetric | Symmetric
8. Productive design | Reproductive design
9. Reversible | Irreversible
10. Knowledge | Ignorance or anti-knowledge
11. Phenomenal, sustainable | Aphenomenal, unsustainable
12. Dynamic/chaotic | Static
13. No boundary | Based on boundary conditions

7.2.2.1 Global Energy 'Crunch': The Disinformation Campaign

Media disinformation has been the most important feature of modern-day communication. One of the drivers of disinformation is the energy sector, which funds a large swath of big business. Consider the following CNN promotion from March 18, 2006:

Airs: March 18 and 19 at 8 p.m., 11 p.m. ET
We Were Warned: Tomorrow's Oil Crisis
What if a hurricane wiped out Houston, Texas, and terrorists attacked oil production in Saudi Arabia? CNN Presents looks at a hypothetical scenario about the vulnerability of the world's oil supply and the world's remaining sources of oil, and explores the potential of alternative fuels.
Poll: Most Americans fear vulnerability of oil supply
Behind the Scenes: Powering the planet
Watch: 'Living on an illusion'
Watch: 'Long war of the 24th century'
Watch: Are big cars irresponsible?
Calculator: How much are you spending on gas?
Gallery: Alternative fuel

Without declaring out loud that "our leaders lied to us", other headlines trailing this particular series promo nevertheless seem to imply that certain parts of the political disinformation are recognized:

WATCH: Former Secretary of State Colin Powell makes the case for war with Iraq to the United Nations.
WATCH: While the mission in Iraq is declared a success, no WMDs have been found.

The question arises: if they got this one wrong, why should anyone believe they are getting "tomorrow's oil crisis" right – especially if they start from the same vantage point of linking the prospect of "crisis" with some inchoate "external terrorist threat"?

The notion of a terrorist threat to Houston possesses a certain validity. However, the terrorists to watch out for do not speak Arabic – at least, not as a first language. Far from praying five times a day in the direction of Mecca, these fellows prey 24 hours a day, in whatever direction their guiding principle points them. That principle says that there is no god but monopoly, and maximum is its profit. Consider Houston as a city already deeply involved in the energy business generally (not just the production, refining, or processing of fossil fuel), as well as a major hub of the Gulf Coast region. The recent evidence shows these terrorists have already done quite a job on the place and the prospects of its population.

In 2001–2002, Enron, headquartered in Houston, "imploded". Its collapse destroyed the livelihoods and life savings of tens of thousands of employees and their families. Over the preceding three years, Enron executives knowingly spread unjustified expectations about future operating revenues in order to attract ever more investment. At the same time, they manipulated legally required accounting reports so that disclosure of enormous losses was deferred as long as possible.
The result was that, by carefully and systematically exploiting vulnerabilities and loopholes created as a byproduct of "deregulating" electricity and natural gas rates in different jurisdictions, millions of natural gas customers in the State of California, the Canadian province of British Columbia, and some other parts of North America were robbed. The murky and nefarious ways in which this robbery was carried out guarantee almost no possibility of ever recovering any portion of what was stolen. Some of the story is only now becoming public, four years later, as part of the trial proceedings against the top executives of Enron. Enron CEO Ken Lay, now on trial for massive fraud, was a major fundraiser for the political campaigns of George W. Bush as governor of Texas, for the Republican nomination to the presidential ticket in 2000, and for the national election campaign that same year. The Bush Administration has been well aware of the enormous civil and likely criminal liabilities involved. Short of openly confronting either regulators or investigators at the state or federal levels – while taking care at the same time to ensure absolutely nothing was done about the tens of thousands whose lives Enron turned upside down – the White House has left no stone unturned to protect Enron and Lay as long and as far as possible. Since those accused of wrongdoing are in court being processed by the legal system, will CNN find any terrorism here worth dramatizing?

Houston is linked to the Gulf of Mexico by a short ship canal. The entire Gulf Coast region continues to reel from the effects of Hurricane Katrina in August 2005, which shattered the city of New Orleans and severely damaged most of the coastline and towns between it and Biloxi/Gulfport, Mississippi. It is now known that the Bush Administration was warned in advance of the likely destructiveness, and of the government's culpability in not taking appropriate protective steps to shore up the levee system around New Orleans in previous years. It is also known that, despite numerous desperate appeals from state and local officials dealing with the crisis occasioned by the hurricane's effects, the federal government took no additional measures to ensure a timely, effective, or adequate response by available emergency-handling systems. At a time when thousands were dying unnecessarily, mostly from the failure of an overloaded and undersupplied emergency system coupled with preceding years of systematic neglect, unconditional and massive no-strings-attached offers of humanitarian aid, including more than 1,500 doctors from Cuba and free fuel supplies and emergency generators from Venezuela, were contemptuously spurned by the Bush Administration. Is CNN prepared to tell this part of the truth about the state terrorism of the Bush administration towards its own population?

Exactly like the Enron crisis, the Katrina crisis exhibited once again the entire familiar catalog of symptoms of long-standing knowledge "at the top" of some persisting but unattended set of serious problems. About to be tested by an existential challenge posed by some short-term spike in external conditions, the moment of truth arrives … and the state fails. Throughout the present Bush Administration's time in office, U.S. government leaders and media have been quick to label other countries "failed states". However, by any of the most elementary criteria anyone might consider in identifying such a condition, the United States today stands as the world's number one example of a failed state.
It is “failed”, however, not for lack of tangible structures and institutions – the criterion applied to various places in Africa, Asia and Latin America to justify applying this label – but for lack of the political will required to ensure these institutions and processes serve people in a time of need. Is this not far worse? Will CNN address any of this reality, for which nobody voted or was ever asked to approve – or will they just carry on whipping up yet another round of mass psychoses about the alleged danger of “terrorists exploiting our democratic freedoms” to wreak further havoc?

7.2.3 HSS®A® Pathway in Economic Investment Projects

Guided by the "logic" of the economies of scale, and the marketing efforts that must accompany them, greater processing is assumed to be, and accepted as being, ipso facto good, i.e., better. As a consequence of the selectivity inherent in such "logic," any other possibility within the overall picture – such as the possibility that, as we go from honey to sugar to saccharin to aspartame, we go from something entirely safe for human consumption to something entirely toxic – does not even enter the framework. Such a consideration would prove very threatening to the health of a group's big business in the short term. All of this is especially devastatingly clear when it comes to crude oil: crude oil is widely and falsely believed to be toxic before a refiner ever touches it, while refined petroleum products, which are utterly toxic, are not to be questioned, since they provide the economy's lifeblood.

In order to elucidate how the HSS®A® pathway has affected modern life, an example is provided from energy management. The first premise, "nature needs human intervention to be fixed," is replaced with "nature is perfect." Islam et al. (2012) presented a detailed discussion of how this change in the first premise helps answer all of the questions that remain unanswered regarding the impacts of petroleum operations. It also helps dismantle the false, but deeply rooted, perception that nuclear, electrical, photovoltaic, and "renewable" energy sources are "clean" while carbon-based energy sources are "dirty." They established that crude oil, being the finest form of a nature-processed energy source, has the greatest potential for environmental good. The only difference between solar energy (used directly) and crude oil is that crude oil is concentrated and can be stored, transported, and re-utilized without resorting to HSS®A® degradation. The conversion of solar energy through photovoltaics, by contrast, creates technological (low efficiency) and environmental (toxicity of synthetic silicon and battery components) disasters (Chhetri and Islam, 2008). Similar degradation takes place for other energy sources as well.

Unfortunately, crude oil, an energy equivalent of honey, has been promoted as the root of the environmental disaster. Ignoring the HSS®A® pathway that crude oil has suffered has created such paradoxes as "carbon is the essence of life and also the agent of death" and "enriched uranium is the agent of death and also the essence of clean energy." These paradoxes are removed once the HSS®A® pathway is understood. Table 7.5 shows the HSS®A® pathway followed by some of the energy management schemes. One important feature of these technologies is that nuclear energy is the only one that does not have a known alternative to the HSS®A® pathway. Yet nuclear energy is being promoted as the wave of the future for energy solutions, showing once again that every time we encounter a crisis, we come up with a worse solution than what caused the crisis in the first place.

Table 7.5 The HSS®A® pathway in energy management schemes.

Natural state | 1st stage of intervention | 2nd stage of intervention | 3rd stage of intervention
Honey | Sugar | Saccharin® | Aspartame®
Crude oil | Refined oil | High-octane refining | Chemical additives for combating bacteria, thermal degradation, weather conditions, etc.
Solar | Photovoltaics | Storage in batteries | Re-use in artificial light forms
Organic vegetable oil | Chemical fertilizer, pesticides | Refining, thermal extractions | Genetically modified crops
Organic saturated fat | Hormones, antibiotics | Artificial fat (transfat) | No-transfat artificial fat
Wind | Conversion into electricity | Storage in batteries | Re-usage in artificial energy forms
Water and hydro-energy | Conversion into electricity | Dissociation utilizing toxic processes | Recombination through fuel cells
Uranium ore | Enrichment | Conversion into electrical energy | Re-usage in artificial energy forms
It is important to note that the HSS®A® pathway has been a lucrative business, because most of the profit is made through this mode. This profit also comes with disastrous consequences for the environment. Modern-day economics does not account for such long-term consequences, making it impossible to pin down the real cost of this degradation. Zatzman and Islam (2007) pointed out the intangibles that caused the technological and environmental disasters in both engineering and economics. As an outcome of this analysis, the entire problem is re-cast as one of developing a true science and economics of nature that would bring back the old principle of value proportional to price. This is demonstrated in Figure 7.2. The figure can be related to Table 7.5 in the following way:

Figure 7.2 Economic models have to be retooled to make price proportional to real value.

Natural state of economics = economizing (waste minimization; the Arabic term qsd carries both "minimization" and "ongoing (dynamic) intention").
First stage of intervention = move from intention-based to interest-based.
Second stage of intervention = make waste the basis of economic growth.
Third stage of intervention = borrow more from the future to promote the second stage of intervention.

The above model is instrumental in turning a natural supply-and-demand economic model into an unnatural, perception-based model. This economic model then becomes the driver of the engineering model, closing the loop of the unsustainable mode of technology development.

7.3 Petroleum Refining and Conventional Catalysts

Crude oil is always refined in order to create value-added products; refining translates directly into value addition. However, the refining process also involves cost-intensive usage of catalysts. Catalysts act as denaturing agents. Such denaturing creates products that are themselves unnatural, and special provisions have to be made for them to be useful. For instance, vehicle engines are designed to run on gasoline, aircraft engines on kerosene, diesel engines on diesel, and so on. In the modern era, there have been few attempts to use crude oil in its natural state; the main innovations have instead been in enhancing performance with denatured fluids. Consequently, economic calculations presume that these are the only means of technology development, which makes any alternate design invariably unsuitable for economic consideration.

Catalysis started to play a major role in every aspect of chemical engineering at the beginning of the 20th century, in sync with the plastic revolution. Today, more than 95% of chemicals produced commercially are processed with at least one catalytic step, including chemicals used in the food industry. Figure 7.3 shows the introduction of major industrial catalytic processes as a function of time. Even though it appears that catalysis is a mature technology, new catalysts continue to be developed. The focus now is on developing catalysts that are more efficient and that muffle toxicity. World catalyst sales accounted for $7.4 billion in 1997 and were estimated at over $20 billion in 2018.

Figure 7.3 Summary of the historical development of the major industrial catalytic processes per decade in the 20th century (from Fernetti et al., 2000).

The processing and refining industry depends exclusively on the use of catalysts, which are themselves extracted from natural minerals through a series of unsustainable processing steps, each rendering the material more toxic while creating profit for the manufacturer. The following operations, mostly involving hydroprocessing applications, use numerous catalysts: tail gas treating; alkylation pretreatment; paraffin isomerisation; xylene isomerisation; naphtha reforming (fixed and moving bed); gasoline desulphurisation; naphtha hydrotreating; distillate hydrotreating; fluidised catalytic cracking pretreatment; hydrocracking pretreatment; hydrocracking; lubricant production (hydrocracking, hydrofinishing and dewaxing); fixed and ebullated-bed residue hydrotreating; and catalyst-bed grading products.

Process description

Each of these processes involves the selection of proprietary reactor internals that are part of conventional design optimization. The entire optimization takes place after fixing the chemicals to be used during the refining process. As pointed out by Rhodes (1991) decades ago, thousands of chemicals are involved in numerous processes. Some examples are: catalytic naphtha reforming; dimerization; isomerization (C4); isomerization (C5 and C6); isomerization (xylenes); fluid catalytic cracking (FCC); hydrocracking; mild hydrocracking; hydrotreating/hydrogenation/saturation; hydrorefining; polymerization; sulfur (elemental) recovery; steam hydrocarbon reforming; sweetening; Claus unit tail gas treatment; oxygenates; combustion promoters (FCC); and sulfur oxides reduction (FCC).

Yet each step of the refining process is remarkably simple and could work effectively with the addition of natural materials in their natural state (i.e., without the extraction of toxic chemicals). The distillation of crude oil into various fractions yields naphtha as the fraction ranging from C5 up to a final boiling point of about 160 °C. This fraction is further treated to remove sulfur, nitrogen, and oxygen (commonly known as "hydrotreating") and rearranged to improve the octane number, either by continuous catalytic reforming (for heavy naphtha, which starts from C7) or by isomerisation (for light naphtha, which contains only C5 and C6 molecules). It is then blended to the desired specification (BS-III, BS-IV, Euro IV, Euro V, etc.) and sold in the market as gasoline through gas stations.
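The gasoline route just described can be summarized as an ordered pipeline. The stage names and cut points below simply restate the text; this is an illustrative sketch, not a process specification.

```python
# Illustrative summary of the gasoline route described in the text.
# Stage names and boundaries restate the chapter; they are not a design spec.
gasoline_route = [
    ("distillation", "crude oil -> naphtha cut (C5 up to ~160 C final boiling point)"),
    ("hydrotreating", "remove sulfur, nitrogen and oxygen"),
    ("reforming / isomerisation", "raise octane number (heavy vs. light naphtha)"),
    ("blending", "meet spec (BS-III, BS-IV, Euro IV, Euro V, etc.)"),
]

for stage, purpose in gasoline_route:
    print(f"{stage}: {purpose}")
```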

Some of the processes are given below.

7.3.1 Catalytic Cracking

Cracking is the name given to breaking up large hydrocarbon molecules into smaller and more useful bits. This is achieved by using high pressures and temperatures without a catalyst, or lower temperatures and pressures in the presence of a catalyst. The source of the large hydrocarbon molecules is often the naphtha fraction or the gas oil fraction from the fractional distillation of crude oil (petroleum). These fractions are obtained from the distillation process as liquids, but are re-vaporised before cracking. The hydrocarbons are mixed with a very fine catalyst powder; because efficiency is inversely proportional to grain size, such powder forms are deemed necessary. For this stage, zeolites (natural aluminosilicates), which are more efficient than the older mixtures of aluminium oxide and silicon dioxide, can move the process toward sustainability. It has been known for decades that zeolites can be effective catalysts (Turkevich and Ono, 1969). The whole mixture is blown, rather like a liquid, through a reaction chamber at a temperature of about 500 °C. Because the mixture behaves like a liquid, this is known as fluid catalytic cracking (or fluidised catalytic cracking). Although the mixture of gas and fine solid behaves as a liquid, this is nevertheless an example of heterogeneous catalysis – the catalyst is in a different phase from the reactants. The catalyst is recovered afterwards, and the cracked mixture is separated by cooling and further fractional distillation. There is not a single unique reaction taking place in the cracker. The hydrocarbon molecules are broken up in a random way to produce mixtures of smaller hydrocarbons, some of which have carbon-carbon double bonds. One possible reaction involving the hydrocarbon C15H32 might be:

C15H32 → 2 C2H4 + C3H6 + C8H18

This is only one way in which this particular molecule might break up. The ethene and propene are important materials for making plastics or producing other organic chemicals. The octane is one of the molecules found in gasoline. A high-octane gasoline fetches a higher value in the retail market.
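The mass balance of any proposed cracking pathway can be checked by counting atoms. The sketch below verifies one commonly cited split of C15H32 into octane, propene, and two ethene molecules; the product slate is an illustrative assumption, since – as noted above – cracking actually yields a random mixture.

```python
# Check the C and H balance of a candidate cracking reaction:
#   C15H32 -> C8H18 (octane) + C3H6 (propene) + 2 C2H4 (ethene)
# The product slate is illustrative; real crackers give mixtures.

def balanced(reactant, products):
    """reactant is a (C, H) tuple; products are (C, H, count) tuples."""
    c_in, h_in = reactant
    c_out = sum(n * c for c, h, n in products)
    h_out = sum(n * h for c, h, n in products)
    return (c_in, h_in) == (c_out, h_out)

# Usage: the split above conserves both carbon (15) and hydrogen (32).
ok = balanced((15, 32), [(8, 18, 1), (3, 6, 1), (2, 4, 2)])
```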

7.3.2 Isomerisation

Hydrocarbons used in petrol (gasoline) are given an octane rating, which relates to how effectively they perform in the engine. A hydrocarbon with a high octane rating burns more smoothly than one with a low octane rating.

Molecules with "straight chains" have a tendency to pre-ignite. When the fuel/air mixture is compressed, it tends to explode once spontaneously and then a second time when the spark is passed through it. This double explosion produces knocking in the engine. Octane ratings are based on a scale on which heptane is given a rating of 0, and 2,2,4-trimethylpentane (an isomer of octane) a rating of 100. In order to raise the octane rating of the molecules found in gasoline, and thus enhance combustion efficiency in an engine, the chemical branch of the oil industry rearranges straight-chain molecules into their branched-chain isomers. One process uses a platinum catalyst on a zeolite base at a temperature of about 250 °C and a pressure of 13–30 atmospheres. It is used particularly to change straight chains containing 5 or 6 carbon atoms into their branched isomers. The problem here, of course, is that platinum is highly toxic to the environment in its pure form (after mineral processing). The same result can be achieved by using platinum ore and adjusting the volume of the reactor. Because platinum ore is natural, it would be free from the toxicity of pure platinum. It is also possible that there are other alternatives to platinum ore – a subject that has yet to be researched.
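On the scale just described, a reference blend's octane number equals its volume per cent of 2,2,4-trimethylpentane. A minimal sketch of that definition follows, assuming ideal linear volumetric blending (a deliberate simplification – blending of real gasoline components is non-linear; the function name is ours, for illustration):

```python
# Octane number of a binary reference blend, on the scale where
# n-heptane = 0 and 2,2,4-trimethylpentane (isooctane) = 100.
# Linear volumetric blending is an idealized assumption.

def blend_octane(fractions: dict) -> float:
    """fractions: component name -> volume fraction (must sum to 1)."""
    ratings = {"n-heptane": 0.0, "isooctane": 100.0}
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(ratings[c] * v for c, v in fractions.items())

# Usage: a 90/10 isooctane/heptane blend defines an octane rating of 90.
ron = blend_octane({"isooctane": 0.90, "n-heptane": 0.10})
```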

7.3.3 Reforming

Reforming is another process used to improve the octane rating of hydrocarbons to be used in gasoline. It is also a useful source of aromatic compounds for the chemical industry. Aromatic compounds are ones based on a benzene ring. Once again, reforming uses a platinum catalyst, suspended on aluminum oxide together with various promoters that make the catalyst more efficient. The original molecules are passed as vapours over the solid catalyst at a temperature of about 500 °C. This process involves two levels of toxic addition that have to be corrected: the first is platinum and the second is aluminum oxide with its related promoters. We have already seen how zeolite contains aluminum silicate that can replace aluminum oxide. In addition, other natural materials are available that can replace aluminum oxide. Isomerisation reactions occur but, in addition, chain molecules are converted into rings with the loss of hydrogen. Hexane, for example, is converted into benzene, and heptane into methylbenzene. The overall picture of conventional refining, and how it can be transformed, is given in Figure 7.4. The economics of this transition is reflected in the fact that the profit made through conventional refining would be channelled directly into a reduced cost of operation.
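The hydrogen released by the ring-forming reactions just described follows directly from the molecular formulas: an alkane CnH2n+2 converting to the corresponding aromatic CnH2n−6 frees four H2 per molecule, whatever the chain length. A minimal check (the function name is ours, for illustration):

```python
# Dehydrocyclization in reforming: CnH(2n+2) -> aromatic CnH(2n-6) + x H2.
# The hydrogen balance fixes x = 4 regardless of chain length n (n >= 6).

def h2_released(n: int) -> int:
    """H2 molecules freed when an n-carbon alkane forms an aromatic ring."""
    alkane_h = 2 * n + 2      # e.g. hexane: C6H14
    aromatic_h = 2 * n - 6    # e.g. benzene: C6H6
    assert (alkane_h - aromatic_h) % 2 == 0
    return (alkane_h - aromatic_h) // 2

assert h2_released(6) == 4    # hexane -> benzene
assert h2_released(7) == 4    # heptane -> methylbenzene (toluene)
```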

Figure 7.4 Natural chemicals can turn an unsustainable process into a sustainable process while preserving similar efficiency. This figure amounts to the depiction of a paradigm shift.

The task of reverting from unnatural to natural has to be performed for each stage involved in the petroleum refining sector. Table 7.6 shows the various processes involved and the different derivatives produced. Note that each of the products later becomes a seed for further use in all aspects of our lifestyle. As a consequence, any fundamental shift from unsustainable to sustainable would reverberate globally.

Table 7.6 Overview of Petroleum Refining Processes (U.S. Department of Labour, n.d.).

Process name | Action | Method | Purpose | Feedstock(s)

Fractionation Processes
Atmospheric distillation | Separation | Thermal | Separate fractions | Desalted crude oil
Vacuum distillation | Separation | Thermal | Separate w/o cracking | Atmospheric tower residual

Conversion Processes – Decomposition
Catalytic cracking | Alteration | Catalytic | Upgrade gasoline | Gas oil, coke distillate
Coking | Polymerize | Thermal | Convert vacuum residuals | Gas oil, coke distillate
Hydro-cracking | Hydrogenate | Catalytic | Convert to lighter HC's | Gas oil, cracked oil, residual
*Hydrogen steam reforming | Decompose | Thermal/catalytic | Produce hydrogen | Desulfurized gas, O2, steam
*Steam cracking | Decompose | Thermal | Crack large molecules | Atm tower hvy fuel/distillate
Visbreaking | Decompose | Thermal | Reduce viscosity | Atmospheric tower residual

Conversion Processes – Unification
Alkylation | Combining | Catalytic | Unite olefins & isoparaffins | Tower isobutane/cracker olefin
Grease compounding | Combining | Thermal | Combine soaps & oils | Lube oil, fatty acid, alky metal
Polymerizing | Polymerize | Catalytic | Unite 2 or more olefins | Cracker olefins

Conversion Processes – Alteration or Rearrangement
Catalytic reforming | Alteration/dehydration | Catalytic | Upgrade low-octane naphtha | Coker/hydrocracker naphtha
Isomerization | Rearrange | Catalytic | Convert straight chain to branch | Butane, pentane, hexane

Treatment Processes
Amine treating | Treatment | Absorption | Remove acidic contaminants | Sour gas, HCs w/CO2 & H2S
Desalting | Dehydration | Absorption | Remove contaminants | Crude oil
Drying & sweetening | Treatment | Abspt/therm | Remove H2O & sulfur cmpds | Liq HCs, LPG, alky feedstk
*Furfural extraction | Solvent extr. | Absorption | Upgrade mid distillate & lubes | Cycle oils & lube feed-stocks
Hydrodesulfurization | Treatment | Catalytic | Remove sulfur, contaminants | High-sulfur residual/gas oil
Hydrotreating | Hydrogenation | Catalytic | Remove impurities, saturate HC's | Residuals, cracked HC's
Phenol extraction | Solvent extr. | Abspt/therm | Improve visc. index, color | Lube oil base stocks
Solvent deasphalting | Treatment | Absorption | Remove asphalt | Vac. tower residual, propane
Solvent dewaxing | Treatment | Cool/filter | Remove wax from lube stocks | Vac. tower lube oils
Solvent extraction | Solvent extr. | Abspt/precip. | Separate unsat. oils | Gas oil, reformate, distillate
Sweetening | Treatment | Catalytic | Remv H2S, convert mercaptan | Untreated distillate/gasoline

*Note: These processes are not depicted in the refinery process flow chart.

7.4 The New Synthesis

For the sustainability criterion to be real, it must be based on knowledge rather than perception. This requirement dictates that the economic schemes include essential features of the economics of intangibles (Zatzman and Islam, 2007). The term "intangibles" refers essentially to the continuous time function, including origin and pathway. For an action, the origin is the intention, and for any engineering product development, the origin is the raw material. Figure 7.5 shows how decisions based on long-term (well-intended) thinking can lead to true success, whereas bad-faith actions lead to failure.

Figure 7.5 Trend of long-term thinking vs. trend of short-term thinking. The human brain makes approximately 500,000 decisions a day (Siegel and Crockett, 2013). The trend in a line of these decisions comprises discrete points. At any one of these points, a bifurcation can begin when a well-intended choice is taken based on appreciating the role of intangibles. The overall trends of long-term and short-term thinking are nevertheless quite distinct. Well-intended decisions can only be made after a knowledge-based analysis. As shown in Figures 7.6 and 7.7, the whole point of operating in the knowledge dimension is that it becomes possible to uncover/discover the intangible factors and elements at work that normally remain hidden or obscured from our view. The following method plagues a great deal of investigation in the natural and social sciences today: 1) advance a hypothesis to test only within the operating range of existing available measuring devices and criteria; then 2) declare one’s theory has been validated when the “results” obtained as measured by these devices and criteria correspond to predictions. This method needs to be replaced by a knowledge-based approach that would ask the relevant and necessary questions about the available measuring devices and criteria before proceeding further. This is the theoretical framework in which we raise the notion of a knowledge-driven economics that would be based truly on economizing rather than wasting.

Figure 7.6 Bifurcation, a familiar pattern from the chaos theory, is useful for illustrating the engendering of more degrees of freedom in which solutions may be found as the “order” of the “phase space,” or as in this case, dimensions, which increase from one to two to three to four.

Figure 7.7 In the knowledge dimension, data about quarterly income over some selected time span displays all the possibilities – negative, positive, short-term, long-term, cyclical, etc. Consider the following pair of figures: The first displays all possible pathways for quarterly income in a time span examined and visible within the knowledge dimension. The second displays a truncation of the same information in two dimensions – a truncation of the kind conventionally presented in business and economics texts, with time as the independent variable. However, the information in the second, while abstractly suggesting a positive trend, actually achieves this effect by leaving out an enormous amount of information about other concurrent possibilities. Figure 7.8 illustrates this point.

Figure 7.8 Linearization of economic data.

7.5 The New Investment Model, Conforming to the Information Age

There are three central concerns that the elaboration of human social solutions to human social problems – including the processes of scientific research and its proper applications – must address. These are:

1. Overcoming the "interest trap" and resisting, in general, all other pressures to mortgage the Future in the name of indefinitely extending the Present;
2. Going with Nature rather than against it; and
3. Confronting pressures to expand or intensify the scale of political-economic integration and homogenization with the counter-demand that the matter of "who decides?" be settled first, before anything else.

There are already many alternatives to the "interest trap". These range from conceptions of Islamic banking, in which interest and any of its equivalents are truly eliminated as drivers, to variations on that theme, such as the Grameen Bank microcredit schemes, which rely on socially distributing and sharing collective risks as opposed to submitting to corporate management of risk. Even though these models still suffer from the shortcoming of having aphenomenal intentions, i.e., amassing wealth, they do offer an opportunity to install truly sustainable economic models (as opposed to Islamic economics, which has a phenomenal intention, i.e., to conform with the purpose of life as ordered in the Qur'an). At bottom, all these ideas share the same basic notion of looking after the future – not by mortgaging it so as to indefinitely extend the present, but rather by working and/or arranging matters in the present so as to take care of the long-term, thereby also ensuring the short-term. Re-organizing scientific and engineering research, and all other activities in life and work, to go with Nature rather than against it is the single most crucial item on any such sustainability agenda.
To reduce the environmental protection agenda to the question of whether this or that isolated, individually-considered process is sustainable is to dodge fundamental questions about whether an overall approach is inherently sustainable or bound instead to "come up short" (so

to speak) in the long-term. Reorienting outlooks in this more general sense – seeking solutions that go with Nature rather than against it – provides some checks against these limiting tendencies, and thereby puts in place a long-term solution to the problem of restoring and maintaining respect for Nature as the mother of all wealth. The matter of "who decides?", a political question, cannot be separated from economics. The conventional economics of tangibles, incapable of responding to this question, simply ignores or suppresses it. A proper look at the economics of intangibles, on the other hand, offers an approach that makes possible a more efficient, more "economical" end-result across the board, in every activity, from information and high technology to oil and gas development. The biggest piece of the Big Picture, requiring the fullest public societal input, is the determination of the scale of integration. This defines the essence of the obstacle placed in people's paths by the Aphenomenal Model and its economic doctrine of "consumption without production", which was elaborated in our previous work. Solving this problem on the basis of enhancing the role of socially positive intentions for the long-term will itself restore a proper appreciation of the fundamental truth that Nature is the mother, just as Labor is the father, of all wealth. Some may consider it impudent, challenging, or even heretical to propose an approach that cites very definite starting-points from the Qur'an, of all places, as well as other starting points – all the while explicitly eschewing dogmatic renderings.
But, standing more than a decade and a half after the collapse of the bipolar division of the globe, it seems more than clear that a major source of many of our current problems lies precisely with the dogmatic renderings and false certainties propounded by the systems of exploitation of persons by persons, defended fervently by both the American and Soviet superpowers as they proceeded to subjugate entire peoples and regions to their plundering and rivalries. These dogmatic renderings attacked and, in some cases, even crushed the human tendency to imagine, to dream, and to aspire. Meanwhile, the peoples are not going to wait to be saved by others, nor go back to sleep trusting in others' promises of salvation. Such is the "cunning of history" that this has become the content of human conscience everywhere throughout the contemporary world. The assault on Islamic faith and beliefs today aims precisely at extinguishing that conscience among all peoples, Muslims and non-Muslims alike, along with any other form of outlook that defends or creates space for conscience. Consider here, for example, the widespread notion that Islamic economic principles are closer to capitalism than they are to communism. This notion is being revived more vigorously than ever, alongside the international expansion of Islamic banking, more than a decade after the disappearance of the Soviet bloc. This dogma is not supported by Qur'anic principles, which would in fact require that people spend more money on others than on themselves, while minimizing waste (The Qur'an, Chapter 2, verse 219). We have discussed this topic extensively in Chapter 6 of this book. Indeed, for many Muslims, spending is maximized when it comes to charity – the longest-term investment one can make. If money is treated as a trust, many self-indulgent pursuits fall by the wayside.
On the other hand, the individual as proprietor, who stands front and centre in the Eurocentric ethos at the core of capitalist social practice and outlook, is expected to determine priorities of expenditure entirely around what expands his/her interests, both as an individual and as someone possessing property. Thus, spending on

personal indulgences – including such obsessions as making more money, procuring more sex, and attaining or being in a position to display the accoutrements of higher social status – is deemed inherently no more or less worthwhile than laying money aside for actual objective needs or social responsibilities. If the proprietary Eurocentric self were not part of this competition, placing itself at the centre, spending for others could become instead the foundation of a prosperous economic infrastructure for all. The long-term investment concept envisioned here is illustrated by Figure 7.9. The outstanding feature of this figure is that endowments and charitable giving in which there is no return to the investor – not even an "incentivized" kickback in the form of a deduction on the investor's income tax liability – generate the highest rates of return for the longest-term investments. In effect, the more social, and the less self-centered, the intention of the investor, the higher the return. What is most natural about the economics of intangibles is this restoration of an explicit role for intention. Such an analysis would make Figure 7.10 relevant to economic considerations. In today's world, in the field of actual economic practice, the power of the monied has reached the point where the greater the investment-attracting interest rate, the greater the amount of foreign direct investment – and the greater the long-term indebtedness of the receiving economy, both in terms of the amount of the debt and the speed at which it is racked up, along with ever greater denial that any of this was intentional (Perkins, 2004). It is inherently unreasonable to believe that this crisis could not be removed, or even have been averted in the first place, if the intentions of interested investors towards these countries and peoples – and not just their resource riches – had been screened in the first place.
In the field of economic theory, within the Eurocentric tradition, this has reached the point where the academic discipline itself is no longer called “political economy”.

Figure 7.9 When intangibles are included, the rate of return becomes a monotonic function of the investment duration.

Figure 7.10 Business turnover cannot be studied with conventional economic theories.

7.5.1 Ignorance-Based Energy Pricing

One of the most important aspects of sustainability is to ensure a correlation between quality and price. Because sustainable technologies are priceless, so to speak, it has been difficult to put a price on them. Unsustainable energy technologies, however, have a preposterous energy pricing scheme in place – preposterous because it attaches greater profit to more unsustainable products. As such, it can be called ignorance-based pricing. The fear mongering in the name of the 'energy crisis' is no different from every other lie that has been promoted in the last 200 years. It is an insult to human intelligence to attempt to promote these lies in the name of science. Let us consider the most abundant and most useful form of energy: sunlight. There are three important features of sunlight:

1. It is 100% efficient. This efficiency emerges from the well-known principle of conservation of energy: energy cannot be created or destroyed. There is not one component of sunlight that is not useful.

2. It is continuously beneficial to the environment. Sunlight gives immediate vision, but it also helps produce vitamin D. Sunlight is crucial for photosynthesis, which benefits the environment by triggering many beneficial chain reactions. As time progresses, the environmental benefit of each photon continues to grow.

3. It is of great value. Even if it were possible to create carbohydrates artificially, the quality of this product would be questionable, even if this may not be evident to all. When chemical fertilizers were introduced in the name of a 'Green Revolution', few realized that fifty years later this would be the most important trigger for the non-Greening of the Earth.

Now, let us see how modern 'civilization' has turned this sunlight into artificial light.
The following sequence is used: Sunlight → (i) solar panel (photovoltaic) → (ii) batteries (for storing energy) → (iii) conversion into artificial light (most commonly the fluorescent, 'energy saver' type). We have converted sunlight, which is free for all, into artificial light that only a few can afford. In the process, we have lost on all three previously-mentioned counts.

1. In efficiency, 100% global sunlight efficiency drops to a global efficiency of less than 5%. The efficiency is further reduced when the system is used in a tropical country, where sunlight is most abundant! People are used to hearing about local efficiency, which in itself is quite poor (for instance, solar panel efficiency would be some 15% at the most), but when global efficiency (the product of the efficiencies of the various stages) is determined, the result is even more outlandish. The efficiency is further reduced as time progresses.

2. In terms of environmental impact, everything that is used to convert sunlight into artificial light is toxic. Consider the most important component of solar panels – the silicon chips. Anyone who has ever heard about silicone breast implants or lip fusion would know the danger of these silicon chips, particularly when they are allowed to oxidize. This severe degradation of the environment continues as time progresses. Consider also the make-up of the batteries. The most modern batteries are more toxic than earlier types, filled with heavy metals of all sorts. With plastic covers and toxic insides, there is no hope of these batteries ever ceasing to pollute the environment. The severity is particularly intense when they are allowed to oxidize, and oxidation takes place at any temperature (contrary to the common perception that it occurs only when materials are incinerated). The final converter into artificial light, the 'inert'-gas-filled tube, continues to radiate light so toxic to the eye that such lights are used to torture people. Hitler didn't have the luxury of using these toxic lights to torture people, but today they are used routinely in Guantanamo Bay in order to conduct 'specialized interrogation'.

3. Consider sunlight and how crucial it is for sustaining life. Photosynthesis would not occur without sunlight, vitamin D wouldn't form, and human life-protecting skin pigments would not exist. Consider how important it is for our brain to operate with sunlight. After all, some 70% of all our sensors are located on the retina. The light we use to see is also the light that illuminates our brain.
Compare sunlight with artificial light. The latter is so toxic that a person persistently exposed to it can feel delusional in no time. How can this loss in value be measured? Figure 7.11 suggests that such light technologies should not have been allowed onto the market.
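The "global efficiency" argument above – the product of the stage efficiencies along the sunlight-to-artificial-light chain – can be made concrete with a back-of-envelope sketch. The stage values used here are illustrative assumptions consistent with the text (about 15% at most for the panel; the battery and lamp figures are our placeholders, not measured data).

```python
# Global efficiency = product of the stage efficiencies along the chain:
#   sunlight -> PV panel -> battery storage -> lamp.
# Stage values are illustrative assumptions, not measurements.
from functools import reduce

stages = {
    "solar panel (PV)": 0.15,                # text: "some 15% at the most"
    "battery round trip": 0.80,              # assumed storage loss
    "lamp (light out / electricity in)": 0.30,  # assumed conversion loss
}

def global_efficiency(stage_effs):
    """Multiply the individual stage efficiencies together."""
    return reduce(lambda a, b: a * b, stage_effs, 1.0)

# Usage: 0.15 * 0.80 * 0.30 = 0.036, i.e. under the 5% figure in the text.
eta = global_efficiency(stages.values())
```

The point of the sketch is structural: because the stages multiply, even modest per-stage losses compound into a small global figure, and adding stages can only lower it.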

Figure 7.11 If regular light bulbs were lousy replacements for sunlight, the fluorescent light is scandalous – the true shock and awe approach, at your expense (from Islam et al., 2010).

7.5.2 Turning Ignorance-Based into Knowledge-Based

What we have is a transition from plus-infinity to minus-infinity. This transition is depicted in Figure 7.12. What is more absurd about this process is that the technological disaster is accompanied by increasing costs of production. A clearly free product is rendered into a very costly one through a series of continuous declines in efficiency, environmental benefit, and the sheer value of the commodity. Figure 7.12 shows how the engineering pathway has necessarily led to turning everything natural into artificial. Figure 7.13 shows how that 'engineering' has benefited profiteering.

Figure 7.12 By converting sunlight into artificial light, we are creating spontaneous havoc that continues to spiral down as time progresses. Imagine trying to build a whole new “science” trying to render this spiral-down mode “sustainable”.

Figure 7.13 In the current technology development mode, cost goes up as the overall goodness of a product declines. Is this necessary? If so, why? There are a number of explanations. The first is that by increasing cost, only a few can afford to commit the requisite capital investment. This is the economic expression of the age-old principle that only money can bring money. In previous centuries, this type of investment would come from the Monarch or the Church. The same principle applies to the new Establishment. Making the investment cost high eliminates competition and increases the chance of a monopoly for the most affluent Establishment (Figure 7.14). Eliminating competition is self-serving for the Establishment, but it takes place by undoing a very fundamental requirement of free-market economies, viz., that they ensure open, free, non-monopolized, non-rigged competition.

Figure 7.14 Increasing threshold investment eliminates competition – the essence of a free-market economy and of economic growth.

It also builds yet another aphenomenal foundation of the modern economic system: all of a sudden, economics (a concept that came from the idea of "economizing," or reducing waste) is built on wasting. Today, the wasting habit is synonymous with civilization. Canada is the best country to live in (ranked no. 1 five years in a row by the UN), and it is also the most wasteful nation (with the highest per capita energy consumption). Bangladesh, on the other hand, is one of the poorest countries in the world and indeed the least wasteful nation (with the lowest per capita energy consumption in the world). The second reason behind increasing cost is the increase of labor. Not too long ago, owning slaves was a sign of prosperity. This attitude has not changed. Any labor-intensive organization can profiteer from the mere presence of laborers, the overhead on whom becomes the biggest source of income. This modus operandi also guarantees that the Establishment earns many times more for every penny earned by the employee. In a broad sense, this Establishment may be the Government, any business, the Church, or even the University system. Now, if we just think of this sunlight and the process that brought us from sunlight to artificial light, we can easily evaluate the merit of the industry built upon suntan lotion, sunblockers, sunglasses, etc. It becomes clear why no Establishment is ever interested in having the general public see the big picture. In light of the above analysis, consider the modern-day energy pricing model. It ranks the value of different energy sources as follows (moving right with decreasing value):

It is also repeated incessantly that nuclear energy is the most efficient process and even the cleanest. As recently as March 2006, Condoleezza Rice (U.S. Secretary of State and the former Provost of Stanford University) stated the need for nuclear collaboration with India lest "India pollutes with more gas burning". This simple statement makes a number of spurious assertions: i) local efficiency is the only one that matters; ii) depriving others is the only form of economic theory there is; iii) long-term (or intangible) impacts are either irrelevant or nonexistent; etc. Doubtless the pragmatic argument in response would go something like this: I could admit every one of your charges as far as universal fairness and justice are concerned, but from the standpoint of practical use, energy would still be energy, and once it is supplied, the matter of who can benefit or profit the most is neither here nor there. Well, let us see. If the knowledge model is used, it becomes clear that the global efficiency calculation alone will reverse the ranking altogether. It will also reveal that energy from biomass is not the same as energy from nuclear power. In the same way that mother's milk was substituted with formula, natural energy sources are being replaced with anti-nature sources. It is only a matter of time before it is revealed that converting energy into electricity is an act of utter ignorance. Remember: energy and matter are neither created nor destroyed. Whatever transformations they undergo, one into the other, in either direction, no new energy or new matter results or emerges. So, obviously, pathway becomes hugely important – and hence also control and gatekeeping of pathways. Natural pathways are cheap and accessible for people, but do not submit well or easily to corporate control. Today, mainly reflecting the enormous expansion of the export to the United States of natural

gas extracted within its territory (overwhelmingly in Alberta and British Columbia), Canada is a net exporter of fossil-fuel based forms of energy. Within this picture it has been, most years, a net importer of crude petroleum.

7.5.3 What is So Special about Canadian Energy Pricing

The only country voted by the UN five times in a row to be the best country to live in, Canada is known as an icon of civilization. Canada is also the most wasteful nation on the planet (with the highest per capita energy consumption) and one of the least populated. Unless 'civilization' is synonymous with inefficiency (higher per capita energy consumption is an excellent indicator of inefficiency), Canada remains the paradox of modern civilization. Not long ago, Quebec families used to give a special gift on the occasion of a daughter's wedding: a voucher to remove all her teeth, so that married life could begin with a new denture. Today's knowledge shows that this practice is not worthy of a gift and should never have been introduced. When one of the authors asked what the technological equivalent of such a practice would be, a graduate student outlined numerous practices (including crude oil 'refining') in the petroleum industry. This would not have been so meaningful had the student not actually been employed by a multinational oil company. In recent years, however, reflecting the development since the mid-1980s of synthetic crude oil from the bitumen deposits of the oil sands of Alberta and Saskatchewan – the largest such reserve in the world – Canadian trade statistics have occasionally registered years in which Canada was a net exporter of synthetic crude to the United States. Within Canada, the domestic as well as the export prices of crude petroleum and its refined byproducts track the world oil price, while the export price of Canadian natural gas at U.S. border toll-gate points closely tracks the so-called Henry Hub price on the Gulf coast for natural gas in the United States. The first howling absurdity and serious injustice is that the price Canadians pay for petroleum and its refined byproducts, which – as data from the U.S.
Energy Information Administration show – are produced within the country well above the cost of production in the Middle East but far below the world price, are pegged at world prices. The second absurdity and injustice is that the domestic price of natural gas tracks the export price and hence stands (according to data and calculations by the Canadian research economist and commentator Jim Stanford) 25– 35 per cent above the price required to compensate shareholders at a standard rate of return for investing in this production. Refined petroleum is available in Caracas for four cents (U.S.) a litre. There is no justification except greed and more than 90 per cent U.S. ownership of the Canadian oil and gas industry for Canadians being held to ransom either at the world oil price or the Henry Hub gas price. Canada’s historical development as a major supplier of U.S. natural gas needs, as well as both the leading producer of synthetic crude and leading reserve of feedstock for synthetic crude on the planet, provides a sobering lesson in how the apparent good fortune to be awash in huge quantities of strategically crucial energy supplies can severely distort the economic development of a people and their country at the cost of increased foreign domination and dependence.

As early as the 1920s, U.S. oil corporations became interested in the Alberta oil sands, mainly as a potential natural gas supply, but Canada's petroleum requirements were entirely colonised at the time by British finance, which viewed the country purely as a consumer market to be milked and not a locale in which to base or develop energy production from domestic sources. After the Second World War, American oil money came to discover and open up the Leduc field near Edmonton, marking the actual beginning of the long development of today's booming commercial-scale oil and gas patch centered in Alberta. As part of integrating Alberta petroleum production and Ontario mass-market consumption under a U.S. strategic umbrella, the St.-Laurent government in 1956, pushed hard by its Trade and Commerce minister C.D. Howe, a U.S.-born citizen, bulldozed through Parliament the notorious Pipeline Bill, authorising a U.S.-backed consortium to build a pipeline supplying Ontario's manufacturing industry with Alberta petroleum and, eventually, its residential consumers with natural gas service.

From this point until 1973, Canada operated under what was called "two-price oil". From Alberta to the Ontario-Quebec border, the only petroleum products available would come from Alberta, priced well above the world price (at that time less than US$2 per barrel). Even at that premium, Alberta oil remained a cheaper energy source than, and competitive in price against, Ontario's vast supplies of hydroelectricity. In Quebec and the Atlantic provinces, on the other hand, crude petroleum would come entirely and exclusively from overseas – the Middle East, Latin America, Nigeria – at the world price, substantially below the Alberta price.
The absence of any pipelines in this part of Canada, however, meant that the final delivered cost of refined petroleum products from the refineries at Halifax, Montreal, Quebec and eventually Saint John, NB, would also incorporate a large transportation-cost premium.

Following the Arab oil embargo of 1973, the world price of oil tripled to about US$10 per barrel. This put enormous economic pressure on eastern Canada at a time when the federal Liberals barely held power in a weak minority government. That government faced a Conservative opposition led by a former provincial premier from Nova Scotia, a province especially hard hit by the latest sharp turn in the world oil price. This opposition was seconded by a New Democratic Party that had scored big gains in the 1972 federal election by painting the Liberals as defenders of and props for what it called "Canada's corporate welfare bums". At the head of the NDP's list stood the American oil majors.

The Trudeau government outmaneuvered the opposition by bowing in the direction of the NDP and setting up Petro-Canada as a national oil company. Then it bowed in the direction of the federal Conservatives by unfolding the National Energy Policy. Without officially scrapping the two-price policy, it protected Ontario manufacturers dependent on Alberta oil deliveries by pegging the pipeline-delivered price of oil to the new world price, which was well below what Alberta producers had been receiving under the previous arrangements. However, the provincial Conservative government of Peter Lougheed in Edmonton split with the federal Conservatives and condemned the National Energy Policy as an attack on Alberta by the rest of Canada.

Until the end of the 1970s, when the world oil price tripled again (from about US$10 to US$30 per barrel), oil prices climbed across Canada, exploration began in earnest off Canada's east coast with the aim of bringing oil and gas ashore for export to U.S. markets, and a major project to run a vast oil pipeline down the Mackenzie Valley in the Canadian Arctic to markets in the U.S. was proposed, studied and shelved. During the 1980s, following the interest-rate recession of 1981, and through to the end of the decade, offshore exploration gradually concentrated on a handful of promising finds that were not taken seriously as development prospects as long as the price of oil was nowhere near US$40 a barrel. During the 1990s, Petro-Canada was increasingly dismantled, the Hibernia oil and Sable Island gas finds started to come ashore, a pipeline from Nova Scotia to New England was completed and inaugurated and, despite a weak world oil price, research and development to commercialise the Alberta oil sands was heavily subsidized by the Canadian federal government. The Syncrude project emerged, alongside an enormous expansion of natural gas production and export. According to data published in 2005 by the Energy Information Administration, the Syncrude project's need for natural gas has slashed the remaining commercial lifetime of Alberta natural gas exports to the U.S. from almost 14 down to about 8.6 years. At the same time, there are at least five projects at various stages of development to deliver liquefied natural gas imports from Russia, Europe and North Africa through specialised port facilities proposed for Nova Scotia, New Brunswick and the St Lawrence River in Quebec to markets in the United States. Canada will be a passer of gas to the U.S., or a user of gas to produce synthetic crude mainly for the U.S., but Canadians' share of usage of these energy riches for their own development will remain constricted by and dependent on the United States.
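The 14-to-8.6-year figure cited above is, in essence, a reserves-to-production (R/P) lifetime. As a minimal sketch (the reserve and draw quantities below are hypothetical, chosen only to reproduce the shape of the cited figures, and are not EIA data), the effect of adding an upgrading demand to the export draw can be computed as:

```python
# Sketch of a reserves-to-production (R/P) lifetime calculation.
# All quantities are hypothetical; they merely reproduce the
# 14 -> 8.6 year shape of the figures cited in the text.

def rp_lifetime(reserves_tcf: float, annual_draw_tcf: float) -> float:
    """Remaining lifetime (years) at a constant annual draw rate."""
    return reserves_tcf / annual_draw_tcf

reserves = 56.0        # hypothetical marketable gas reserves, Tcf
export_draw = 4.0      # hypothetical annual export draw, Tcf/yr
upgrading_draw = 2.5   # hypothetical extra draw for bitumen upgrading, Tcf/yr

print(f"{rp_lifetime(reserves, export_draw):.1f}")                  # 14.0
print(f"{rp_lifetime(reserves, export_draw + upgrading_draw):.1f}") # 8.6
```

The point of the sketch is that the lifetime falls not because reserves shrank, but because a second, domestic industrial draw was added to the same reserve base.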

7.5.4 What is Really the Driver of Energy Pricing?

The general notion of conventional economic theory is that there will always exist a price at which some level of demand for a commodity will call forth the necessary supply. That price is defined as the equilibrium price for that commodity under whatever conditions normally accompany the production, delivery and sale of the said commodity. The generality of this conception is applied to all products that can be produced, supplied and sold in the market as commodities. In the immediate case of the supply of forms of energy such as petroleum and its refined products, this model is supposed to be applicable to the supply of, say, a barrel of oil of a certain average market grade, but not necessarily to the discovery or sale of an entire oil field.

Aberrations from this norm are often explained as the result of prior conditions that violate a key underlying assumption essential for applying this simple model to any commodity in the first place. That key underlying assumption is that, in the market at the point of exchange, all the different parties involved – in this case the oil prospector/producer, the refiner, the transporters to market by pipeline or bulk carrier on rail or ship, and the buyer(s), or the corporate representatives of any of these – are buying or selling their respective portions of these transactions completely independently of each other. Then and only then can the price at which each of their exchanges takes place be considered a true equilibrium, conforming to all the essential features of the supply-demand model of commodity exchange.

In corporate law and accounting practice, this condition is fulfilled if the transacting entities are formally separate and distinct. However, in reality, if beneficial ownership of both sides of any number of these transactions rests with the same controlling group, then the proclaimed independence becomes a sham. Furthermore, if the beneficial owners or controlling interests of some producing field or fields are leveraging future deliveries on contracts with buyers on the basis of current production, while secretly holding back knowledge of changes planned to future production, then the basin supplying the raw material that eventually arrives as commodities in the market – and, more specifically, the estimates of its reserves – exercises an intangible influence on price that has nothing to do with an independent buyer and an independent seller reaching any supply-demand equilibrium price.

In conventional economic analysis, this influence – being widely understood but intangible and hence nowhere objectively quantified – is dismissed, while (as hinted above) vertical integration of oil producers and their control or domination of both upstream and downstream activity is deemed aberrational rather than normal. So long as such oligopolistic structures and the associated cartel behaviour (formally illegal but officially undocumented price-setting arrangements among groups of competitors, for example) are not considered normal, the fossil fuel energy commodities brought to market by such vertically-integrated operators will continue to be treated as commodities supplied to satisfy a demand at some equilibrium price. That price in this instance is the world oil price.
The supply-demand model can account for sudden spot-price increases or decreases in that price, but the one thing it cannot account for is why or how that price should have tripled suddenly in late 1973, tripled again in 1979, fallen back by two-thirds between 1991 and 2000, and tripled again since 2004. On the contrary, so long as the applicability of the supply-demand equilibrium model to energy commodities is held to be above and beyond criticism, the causes and consequences of these extremely dramatic shifts remain utterly mysterious and inexplicable. From a business standpoint, they become risk factors and contingencies against which funds are to be set aside. From a research standpoint, either one is honest, investigates these matters as the effects of known or discoverable causes, and integrates an understanding of them into an understanding of the entire energy-pricing picture – or one becomes the tool or mouthpiece of the currently dominant energy cartel.
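The textbook model being critiqued here can be stated in a few lines. With hypothetical linear curves (the coefficients below are illustrative, not market data), the "equilibrium" price is simply the intersection of supply and demand; nothing internal to the model can shift that price threefold overnight without an exogenous change to the curves themselves:

```python
# Minimal sketch of the supply-demand equilibrium model discussed above.
# Demand: Qd = a - b*P (falls with price); Supply: Qs = c + d*P (rises with price).
# All coefficients are hypothetical.

def equilibrium(a: float, b: float, c: float, d: float) -> tuple:
    """Solve a - b*P = c + d*P for the equilibrium price and quantity."""
    p = (a - c) / (b + d)
    return p, a - b * p

# Hypothetical demand Qd = 100 - 2P and supply Qs = 10 + 4P:
p_star, q_star = equilibrium(a=100.0, b=2.0, c=10.0, d=4.0)
print(p_star, q_star)  # 15.0 70.0
```

The sudden triplings in the world oil price described above would, in this framing, require the curves themselves to jump, which is exactly the kind of exogenous, cartel-driven shift the model takes as given rather than explains.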

7.5.5 Corruption is Happiness? The Two Extrema

A recent survey by Transparency International (TI) ranked Bangladesh (the most densely populated country in the world) as the most corrupt. At the same time, the TI survey showed Bangladesh to be the happiest country. This seems consistent with yet another extreme status of Bangladesh: it is also the country with the lowest per capita energy consumption. In engineering, this would be equivalent to the highest efficiency (unless, of course, humans are not all equal). By the same measure, Canada ranks as the least energy-efficient country (with the highest per capita expenditure of energy). Canada is also the country that is 'run by hidden hands behind hidden hands' (in the words of Maclean's Magazine editor Peter C. Newman). According to a longtime Ottawa policy maker, cited by the Canadian political commentator and journalism professor Walter Stewart, there are "two things you don't know what they are made from – 1) hot dogs; and 2) policy making."

7.5.5.1 Civilization for Whom?

Dr. Abdul Kalam (former President of India) said: "Ask what we can do for India and do what has to be done to make India what America and other western countries are today". The pressure to emulate the West has been enormous. Yet the West is also the place that does not have a single city offering unpolluted air, non-chemical water, fresh milk, or truly organic fresh food. In fact, it is ironic that the West, instead of promoting normal reproduction of the species or the healthiest nutritional sources for young people's growth, promotes same-sex marriage and even bans the sale of fresh, i.e., unpasteurized, milk. Even though doctors know that not a single chemical medicine cures any disease, one who prescribes herbal remedies risks jail.

A common question asked in America is, "Are we better off today than we were four (4) years ago?" This particular numerological fascination is interesting on two counts. First, contrast the evident obsession with the 4-year term with the general Eurocentric/"western" dismissal of the mention of "70 virgins" in the Qur'an as some kind of argument against Muslim notions of quantity. Second, why limit the comparison to the last four years, the time-frame between U.S. presidential elections? When we consider the state of civilization, a more appropriate question might be: "Are we better off today than we were 4,000 years ago?" If one compares the characters of world leaders from that time (for instance, Moses) and now (where to start?), the answer becomes very clear. Figure 7.15 shows how the world is currently divided into two extremes.
So-called “technological development” (which Nobel Laureate Robert Curl called “the technological disaster”) in developed countries has taken place at the expense of technological dependence of another group, namely, developing countries. It just so happens that the latter constitutes 80% of the planet’s human population. If the human being is the best creation of God, such a division of humankind into two such extremes suggests that the best creation of God can hardly be said to be living up to his expectation. The immediate outcome of this ‘technological’ division is an economic division.

Figure 7.15 Because of the "stupidity, squared" mode, technology development in the west continues at the expense of technology dependence in the east. In this, the developing countries are ignorant because they think that this technological dependence is actually good for them, and the developed countries are ignorant because they think one can exploit others at the level of obscenity and get away with it in the long-term.

Are you better off today than you were 4,000 years ago? You don't have to consult Moses to find an answer; Figure 7.17 examines precisely that. The most important new features of this situation are a growing realization that developing countries have the power to reverse the trend, and that it is to the advantage of the developed countries to forego the imperialist agenda responsible for generating such an inherently unstable, and destabilizing, polarization in the first place: good for some and bad for others is bad for everyone, and no longer acceptable in the Information Age.

The world came to this absurd level of disparity by defining civilization according to the wasting habits of a nation. If waste is also a sign of ignorance, one must admit modern civilization has become defined by the level of ignorance. Consider Canada. That country was ranked by the UN as the best place to live on Earth five years in a row (1998–2003) – and is also the world's most energy-imprudent nation. Canada consumes three times more energy per capita than Japan, and 10 times more than China. Yet pundits suggest that, when it comes to energy sustainability, the problem is population, never lifestyle. They would ask the developing countries to emulate the wasting habits of the developed countries – in fact, they would make that a precondition before offering any 'help'.
The extension of this logic illustrates the consequences no one wants to spell out: one would have to kill off 50 Indians or Bangladeshis so that one Canadian could maintain his wasting habit. How could the pundits be so wrong? Here one must look at the history of science, social science, and economics in the modern age to find answers. The pundits themselves, no less than their punditry, are products of a system based on a false, top-down, implosive model.

7.5.5.2 Root of the Technological Disaster

Neither majorities nor consensus accounts for why or how the world actually undergoes transformative change. The German philosopher Georg Wilhelm Friedrich Hegel described the essence of fundamental change involving both the form and the content of any entity, material or philosophical, as "the throwing off of the form, the transformation of the content". The path that opens the door to finally transforming the content of policy in a decisive and fundamental way is that the polity affirms its sovereign right over the policy-making system. In this, the role of the government is so profound that throughout human civilization this role has been the point of greatest contention.

What remains controversy-free, however, is the fact that the world has always been divided into three groups. The first is a pro-humanity group – the prophets, the good leaders, and also the most controversial characters. They are controversial because the world is seldom ruled by these people. Instead, the Establishment usually is controlled by members of the other extreme – the group with a staunchly anti-humanity, anti-nature agenda. The third group comprises the vast majority of mankind, lacking any particular awareness of any mission in life, waiting on the fence, often giving the appearance of "sheep" – and crucially relied upon to provide the biggest bulwark in support of the status quo. Figure 7.16 makes this point clear and shows how the dependency of the third world countries is deliberate.

Figure 7.16 As a result of the over-extension of credit and subsequent manipulation (by the creditors: Paris Club etc.) of the increasingly desperate condition of those placed in their debt, nostrums about “development” remain a chimaera and cruel illusion in the lives of literally billions of people in many parts of Africa, Asia and Latin America. (Here the curves are developed from the year 1960.)

Figure 7.17 Pathways destructive of intangible social relations (cf. Figure 1).

Distinguishing a good leader, a bad leader, or a non-leader is actually neither rocket science nor brain surgery: those whose actions are based on long-term benefits (this also means doing things for others, for intangible benefits, and for the improvement of nature) are the Prophets, whom we refer to as the 'Moses' figures. Those acting for the short-term (this also means they are obsessed with self-interest and tangible benefits) are the compulsive psychopaths incapable of empathy, whom we call the 'Pharaohs'. The decisive moment of transformative change begins with the Prophets openly challenging the Pharaohs, and it ends with the remainder finally making their move.

What has not changed in 4,000 years is the model adopted by the Pharaohs to block transformative change: it is top-down, it is implosive, it leads to infinite injustice – our research group has dedicated an entire book, Revolution in Education, to the subject – and it can be overcome and defeated with a knowledge-based, "bottom-up" model. Adopting this alternative model, and without having to make choices like "throwing out the baby with the bathwater" (a major argument often leveled against advocates and advocacy of systemic transformation), it is possible to dispose of the historical crib and bad conscience of the top-down model and enable transformative changes to emerge.

In the top-down model, a decision is based on self-interest and short-term gain. Such is its motivation that anyone acting according to this model cannot and will not publicly divulge it. Instead, they resort to disinformation. This essentially involves planted stories. This phenomenon, extremely widespread in journalism, social science and current/public affairs, also "sets a tone" that infests, and infects, research, investigation and knowledge-gathering in all fields.
Even the approaches taken to numerical modeling in scientific and engineering research, which many like to believe are insulated from such impure outside influences, are not immune. In fact, our research indicates that disinformation plays a far more insidious role in "science" and research (not Science the process, but science as understood in the modern age) than in any other field.

A most blatant case: Hiroshima's destruction, and especially its tragic ongoing aftermath – subsequent generations dying prematurely from previously unheard-of disorders created by the scale and concentrated dosage of radioactivity unleashed by the first atomic weapon – serves as a grim reminder of disaster produced simultaneously at several levels. In 1954, the last year of Einstein's life, he admitted to an old friend, "I made one great mistake in my life—when I signed the letter to President Roosevelt recommending that atom bombs be made; but there was some justification—the danger that the Germans would make them." Now we can confidently say that the atomic bomb that was touted as the biggest engineering achievement of its time was actually the worst possible tool in the service of the Pharaohs. Of course, the abuse of science to advance the cause of evil has skyrocketed since then. However, the root of this whole line of thought dates back to the Eurocentric culture that saw nothing but the implosive top-down model throughout history.

The people leading these nations are agents of the status quo and serve only self-interest. We should hardly be surprised that our policy makers continue to give themselves raises. Driven by self-interest and short-term gains, the central aim is purely self-perpetuating: to ensure one's own job security before everything else. Those setting policy must be the servants of those governed by the policies being set; sovereignty must reside with the ruled and not the rulers. This condition is precisely what is missing both in theory and in practice in Canada (the Queen of Great Britain is the Sovereign in Canada) and quite a few other countries; it is also missing in practice even in countries where the ruled nominally exercise sovereignty, such as the United States, France, etc. So long as it is missing in practice, what we discuss here as "the implosive model" will prevail.
7.5.5.3 How Does the Top-Down Model Invade 'Civilization'?

As pointed out earlier in the discussion of the pathways, the essence of Eurocentric culture lies in that Establishment's promotion of a top-down, unstable model that produces real chaos for those outside the sealed train (Figure 1.1b). As can be seen in Figure 1.1b, in this process a decision precedes actual investigation or even the collection of data. As such, the entire exercise of data collection and data processing is motivated solely by maximizing short-term profits, so how logical or illogical the outcome of a particular project may be is irrelevant. That alone makes it abundantly clear that the promotion of every such project is deliberately ill-intended. This state is achieved by deliberately promoting lies (among complete lies, surely we cannot exclude the half-truth). Prozac, the wonder drug, is now known to cause suicidal behavior. Fluorescent light, once promoted as so white it could replace sunlight, is a torture to human eyes and causes depression; the promise was that it would elevate your mood and cure your cabin fever. Microwaves destroy 97% of the nutrients in vegetables and release toxins from plastic containers, including baby bottles. When did we find this out? Long after microwave ovens had been popularized to the point of obscenity even in the 'darkest' parts of the world. Anti-oxidants, which were to give us an ageless body, are now found to cause lung cancer. Where (on this side of the grave) does the lying end?

Today, in an apparent manifestation of the helplessness promoted by Eurocentric culture, even the third world cannot think of making use of natural sunlight, solar energy, water power, and, most importantly, its human assets. Why else would indoor temperatures in Dubai be maintained at 18 °C while electric heaters run to supply hot water? Much has been said about this mode in the context of the famed Burj Al Arab Jumeirah (Picture 7.1) and the Burj Khalifa (the world's tallest building). Why else would Bangladesh be considered among the poorest nations in the world when it has the highest concentration of human resources in the world? What mindset could be behind keeping the lights on (the kind that promotes depression) in a country that has the most sunshine in the world?

The situation is worse still in Canada and the USA. With temperatures running in the range of −40 °C, people maximize the use of refrigerators, maintain indoor temperatures above 20 °C, add chlorine (a toxic agent) to render natural water 'potable' and then drink only 5% of it while wasting the rest, and even resort to pasteurizing honey (the only known product that does not rot). Picture 7.2 shows the absurdity of using electricity to produce snow in a place inundated with natural snow. In our current technology development mode, we teach how to fight vacuum in outer space, how to denature natural materials, how to use electricity to make ice at the North Pole, how to heat with microwaves in a Saharan desert sizzling at 120 °C sand temperatures, how to package fresh vegetables in the Amazon, how to stay dry in a rainforest, how to clean without water, how to use toxins to purify water, how to dry-clean with carbon tetrachloride, how to sweeten with a neurotoxin, how to rid ourselves of dirt, how to reduce our oxygen intake so we can live longer, how to decaffeinate coffee, how to bleach coffee filters, tampons, and toilet paper… the list is truly endless.

Picture 7.1 The 8-star hotel in hot and sunny Dubai: all rooms fully air-conditioned and totally isolated, so that natural sunlight is replaced by fluorescent light and water is heated with electricity. Under the external glitter, it is difficult to see what the third world has been reduced to.

Picture 7.2 Even on the day Nova Scotia, Canada was hit with the worst snowstorm in its recorded history, indoor temperatures were maintained at 20 °C – well above the room temperature of Bangladesh on a winter night – and refrigerators were powered by electricity that was so difficult to come by that day.

The model does not stop with technology; it continues to wreak havoc on the social side, converting everything good for the long-term into something appealing to the short-term and only the short-term. On the social side, this model translates into the following conversions of good into evil:

History, culture → entertainment (e.g., fashion shows, 'beauty' contests)
Smiles → Laughter (e.g., stand-up comedy)
Love → Lust
Love of children → Pedophilia
Passion → Obsession
Contentment → Gloating
Quenching thirst → Bloating
Feeding hunger → Gluttony
Consultation → Politicking
Freedom → Panic
Security → Fear
Liberation → Enslavement
Dream → Fantasy
Justice → Revenge
Science → "Technological development"
Social progress → "Economic development"
Positive internal change (true civilization) → Negative external change (true savagery)

7.5.5.4 Other "Problems" Said to "Plague" Third World Countries

Take a look at some of the following principles that are taught in every underdeveloped country.

1. Population is a problem: Completely oblivious to the fact that human capital is the most important asset of a nation, particularly in the Information Age, it has been promoted since the 1960s that population is the single most important problem of a developing country. The concept of family destruction (promoted as 'family planning', 'planned parenthood', etc., by people who considered human beings liabilities; it led to the simultaneous decline in population and family values in all western countries except the USA) was possibly the only strategy kept intact through many regimes in the developing countries. Couple this with the fact that the West is still very active in plundering and wasting natural resources – a Canadian today (Canada is number one in per capita energy consumption) uses some fifty times more energy than, say, a Bangladeshi – and one realizes that the scheme of reducing the population in developing countries has nothing to do with global sustenance and everything to do with a scheme of global dominance. Today, one can see large ads promoting the slogan "Plant trees, save the country" alongside "Girl or boy, one child is enough" everywhere in urban areas of the 'least developed' countries. Does this not imply that trees are more important than humans? After all, both trees and humans take nutrition from the same soil and breathe the same air; why then do we wish to reduce the human population while increasing the number of trees? No one complains that Saudi Arabia has too much crude oil, so why are we complaining that Bangladesh (as an example) has too many human beings (however 'crude')? A recent analysis shows that the number of taxpayers in Bangladesh has actually declined in the last 30 years while the number of dependent and corrupt people has increased – a rather worrisome statistic. Figure 7.18 illustrates this point.

2. Heat is a problem, you have too much sun: Fifty years ago, manual fans (in both homes and offices) were replaced in favor of electric fans. The idea was that heat had to be beaten. Even though Thomas Alva Edison declared long ago, "I would put money on the sun and solar energy. What a source of power! I hope we don't wait 'til oil and coal run out before we tackle that", the sun was designated the enemy in hot countries, and direct solar energy had to be forsaken in the east. In turn, each electric fan started to blow hot air (literally). To make sure that outside heat did not get in, insulation was improved, trapping more hot air in every household. People still felt some relief because they were not working (the fan puller lost his job, and the householder got no exercise) and because they felt a bit cooler from losing latent heat (sweating). When people felt dehydrated, they were served Coke (the fashionable drink), which actually dehydrates. So it is so hot in these countries that millions of motors must run to blow hot air, and when people feel dehydrated they must drink Coke that dehydrates them further.

3. Chemical "solutions" are solutions: Hundreds of chemicals have been introduced into the daily lifestyle of the developing countries. A down-to-earth example is the mosquito problem in tropical countries.
No less than dozen types of exported chemical poisons are being sprayed (with aerosol and all) everyday in practically all households of in these countries. The idea is to destroy all mosquitoes (this ‘war on invisible’ approach does not help any cause). As millions of dollars are spent on imported chemicals that are sprayed everywhere, the bird and bat populations disappear. Considering that each bat can consume thousands of mosquitoes daily, the natural process of mosquito control is aptly replaced by ‘shock and awe’ treatment of mosquito repellants. As the strength of these chemicals increases, of course, mosquitoes grow immune to all these chemicals. In the meantime, all local means of fighting mosquitoes have been eradicated, including numerous bats that left the city along with any other type of bird. 4. Interest rate should be high: Figure 7.19 shows how the interest rate alone can spiral the economy down. In our research group, we have studied the interest rate structure of many countries and there is a direct correlation between economic misery and interest rate, the misery being measured in terms of fall in export, unemployment, rising corruption, foreign dependence, and so on. When a whole nation is taught to invest in ‘interest’-based schemes, there is no way that nation can survive economic onslaught. On a personal level, this much like the credit card companies who create dependence by enticing customers into spending more than they can afford only to become totally bankrupt at the end. If one couples this with the craze of spending for ‘foreign’ goods, transferring money in foreign banks, and the perpetual corruption that sees no end in sight, the future of these countries

remains bleak. Figure 7.20 shows how other factors also depend on interest rates. It is no wonder that local economists (e.g., Dr. Yunus of Bangladesh) have carefully avoided any in-depth study of this graph. For them, there is no alternative to western economics. 5. Definitions of malnutrition, the poverty line, sustainability, and other moving targets: Malnutrition does not mean the lack of processed food. In fact, someone who consumes high-calorie food is likely to become obese. In developing countries, children in urban areas have grown fatter and get hardly any exercise (physical or mental), yet they are considered the least likely victims of malnutrition. Some would say this is a move in the right direction; after all, obesity in North America has gone up 400% in recent years. It is important to realize that the definitions thrown at third world countries are carefully developed to perpetuate economic slavery and political dominance, and are largely focused on externals and tangible values. In the Information Age, intangibles are more important, and such factors as eating when one is hungry (not when it is fashionable to eat) and drinking water (not other, dehydrating agents) when one is thirsty have become more important than the old metrics that created the culture of extremism in the west.
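The compounding mechanism behind this downward spiral can be sketched numerically. The figures below are purely hypothetical illustrations, not data from any of the countries studied: the point is only that debt compounding at a high interest rate outpaces income that grows slowly and linearly.

```python
# Sketch (hypothetical numbers): compound interest on unpaid debt versus
# slowly growing income -- the "spiral" attributed to high interest rates.

def years_until_insolvent(debt, income, rate, income_growth=0.02):
    """Years until annual interest alone exceeds annual income."""
    years = 0
    while debt * rate < income and years < 200:
        debt *= (1 + rate)             # unpaid interest compounds
        income *= (1 + income_growth)  # income grows only slowly
        years += 1
    return years

# Same initial debt and income, different interest rates:
for rate in (0.05, 0.10, 0.20):
    print(rate, years_until_insolvent(debt=100.0, income=20.0, rate=rate))
```

Running the sketch shows that doubling or quadrupling the rate collapses the time to insolvency disproportionately, which is the qualitative behavior the figures in this section depict.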

Figure 7.18 Is it the total population that makes the economy plummet, or rather the growth in the corrupt portion that one should worry about?

Figure 7.19 The role of interest rate and the operating principles around the world.

Figure 7.20 The role of interest rate in driving economic decline. 1 ‘Essence,’ in the sense that the inner characteristics of natural processes are never followed; the chief characteristic being internal sustainability, which is violated the moment that the aphenomenal ‘imitation’ of nature is developed.

Chapter 8 Economics of Sustainable Energy Operations 8.1 Introduction The evolution of human civilization is synonymous with how it meets its energy needs. Few would dispute that the human race has become progressively more civilized with time. Yet, for the first time in human history, an energy crisis has seized the entire globe, and the very sustainability of this civilization has suddenly come into question. If there is any truth to the claim that humanity has actually progressed as a species, it must exhibit, as part of its basis, some evidence that overall efficiency in energy consumption has improved. In terms of energy consumption, this would mean that less energy is required per capita to sustain life today than, say, 50 years earlier. Unfortunately, exactly the opposite has happened. We used to know that resources were infinite and human needs finite. After all, it takes relatively little to sustain an individual human life. Things have changed, however, and today we are told, repeatedly: resources are finite, human needs infinite. What is going on? In this chapter, the root causes of unsustainability are revealed and remedies proposed so that new technologies are inherently sustainable. This chapter evaluates the sustainability of energy technologies. Conventional sustainability assessments usually focus on the immediate impacts of a technology. This chapter introduces a new methodology to posit a broader definition of true sustainability, examining a time-tested criterion, as well as environmental, economic, and social variants, to assess the sustainability of participatory energy development techniques. This chapter shows that conventional petroleum technologies are less toxic than conventional 'renewable' energy technologies. Moreover, the sustainable version of petroleum technologies is far superior to conventional 'renewable' technologies, and on par with direct solar energy, organic biodiesel, and wind energy.
In this chapter, the mysteries of the current energy crisis are unraveled. The root causes of unsustainability in all aspects of petroleum operations are discussed. It is shown how each practice follows a pathway that is inherently implosive. It is further demonstrated that each pathway leads to irreversible damage to the ecosystem and can explain the current state of the Earth. It is shown that fossil fuel consumption is not the culprit; rather, the practices involved from exploration all the way to refining and processing are responsible for the current damage to the environment. The discussion is based on two recently developed theories, namely the theory of inherent sustainability and the theory of knowledge-based characterization of energy sources. These theories explain why current practices are inherently inefficient and why new proposals to salvage efficiencies have no better chance of remedying the situation. It is recognized that critiquing current practices may be necessary, but it is not sufficient. The second part of the chapter therefore deals with practices that are based on the long term. This can be characterized as the approach of obliquity, which is well known for curing both long-term and short-term problems. It stands 180 degrees opposite to the conventional band-aid approach that has prevailed in the Enron-infested decades. This chapter proposes the greening of every practice of the petroleum industry, from management style to upstream to downstream. Finally, this chapter presents the true merit of the long-term approach, which promotes sustainable techniques that are socially responsible, economically attractive, and environmentally appealing.

8.2 Issues in Petroleum Operations Petroleum hydrocarbons are considered the backbone of the modern economy. The petroleum industry, which took off in the golden era of the 1930s, has never ceased to dominate all aspects of our society. To date, there is no suitable alternative to fossil fuels, and all trends indicate continued dominance of the petroleum industry for the foreseeable future (Islam et al., 2018). Even though petroleum operations have been based on solid scientific excellence and engineering marvels, only recently has it been discovered that many of the practices are not environmentally sustainable. Practically all hydrocarbon operations are accompanied by undesirable discharges of liquid, solid, and gaseous wastes (Khan and Islam, 2007), which have enormous impacts on the environment (Islam et al., 2010). Hence, reducing environmental impact is the most pressing issue today, and many environmentalist groups are calling for curtailing petroleum operations altogether. Even though there is no appropriate tool or guideline available for achieving sustainability in this sector, there are numerous studies that criticize the petroleum sector and attempt to curtail petroleum activities (Holdway, 2002). There is clearly a need to develop a new management approach to hydrocarbon operations. The new approach should be environmentally acceptable, economically profitable, and socially responsible. From this follows the need to develop a new economic tool to evaluate sustainable technologies. Crude oil is truly a non-toxic, natural, and biodegradable product, but the way it is refined is responsible for all the problems created by fossil fuel utilization. Refined oil is hard to biodegrade and is toxic to all living organisms. Refining crude oil and processing natural gas use large amounts of toxic chemicals and catalysts, including heavy metals.
These heavy metals contaminate the end products and are burnt along with the fuels, producing various toxic byproducts. The pathways of these toxic chemicals and catalysts show that they severely affect the environment and public health. The use of toxic catalysts creates many environmental effects that cause irreversible damage to the global ecosystem. A detailed analysis of the pathway of crude oil formation and of the pathways of refined oil and gas clearly shows that the problem with oil and gas operations lies not in the crude itself but in its refining and processing.

8.2.1 Pathway Analysis of Crude and Refined Oil and Gas 8.2.1.1 Pathways of Crude Oil Formation Crude oil is a naturally occurring liquid found in formations in the Earth, consisting of a complex mixture of hydrocarbons of various chain lengths. It contains mainly four groups of hydrocarbons, among which the saturated hydrocarbons consist of straight chains of carbon atoms, the aromatics consist of ring structures, and the asphaltenes consist of complex polycyclic hydrocarbons with complicated carbon rings; the remaining compounds mostly contain nitrogen, sulfur, and oxygen. Crude oil and natural gas are the products of huge overburden pressure and the heating of ancient organic materials over millions of years. Oil, gas, and coal are formed from the remains of zooplankton, algae, terrestrial plants, and other organic matter after exposure to the heavy pressures and temperatures of the Earth. These organic materials are chemically changed into kerogen. With more heat and pressure, along with bacterial activity, oil and gas are formed. Figure 8.1 shows the pathway of crude oil and gas formation. These processes are all driven by natural forces.

Figure 8.1 Crude oil formation pathway (After Chhetri and Islam, 2008). 8.2.1.2 Pathways of Oil Refining Fossil fuels derived from petroleum reservoirs are refined to suit various applications, from car fuels to airplane and rocket fuels. Crude oil is a complex mixture of hydrocarbons whose composition varies with its source. Depending on the number of carbon atoms the molecules contain and their arrangement, the hydrocarbons in crude oil have different boiling points. To take advantage of these differences in boiling point, fractional distillation is used to separate the hydrocarbons from the crude oil. Figure 8.2 shows the general activities involved in oil refining.

Figure 8.2 General activities in oil refining (Chhetri and Islam, 2007b). Petroleum refining begins with the distillation, or fractionation, of crude oils into separate hydrocarbon groups. The resultant products are directly related to the properties of the crude processed. Most of the distillation products are further processed into more conventionally usable products by changing the size and structure of the carbon chains through cracking, reforming, and other conversion processes. To remove impurities from the products and improve quality, extraction, hydrotreating, and sweetening are applied. Hence, an integrated refinery consists of fractionation, conversion, treatment, and blending, including petrochemical processing units. Oil refining involves the use of different types of acid catalysts along with high heat and pressure (Figure 8.3). Thermal cracking breaks large hydrocarbon molecules into smaller ones. During alkylation, sulfuric acid, hydrogen fluoride, aluminum chloride, and platinum are used as catalysts. Platinum, nickel, tungsten, palladium, and other catalysts are used during hydroprocessing, while distillation relies on high heat and pressure. The use of these highly toxic chemicals and catalysts creates several environmental problems; they contaminate the air, water, and land in different ways. The use of such chemicals is not a sustainable option. The pathway analysis shows that the current oil refining process is inherently unsustainable.
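The fractionation principle described above can be sketched as a simple sorting of components by boiling point. The cut temperatures below are rough textbook approximations chosen for illustration, not actual refinery specifications, which vary by crude and by unit:

```python
# Illustrative sketch: assigning hydrocarbon components to distillation
# fractions by boiling point. Cut temperatures are rough approximations.

CUTS = [  # (upper boiling point in deg C, fraction name)
    (25,   "refinery gas"),
    (200,  "gasoline/naphtha"),
    (275,  "kerosene"),
    (350,  "diesel / gas oil"),
    (float("inf"), "residue (fuel oil, bitumen)"),
]

def fraction_for(boiling_point_c: float) -> str:
    """Return the distillation fraction a component reports to."""
    for upper, name in CUTS:
        if boiling_point_c <= upper:
            return name
    return "residue (fuel oil, bitumen)"

# Octane (b.p. ~126 C) leaves in the gasoline cut;
# eicosane (b.p. ~343 C) leaves in the diesel cut.
print(fraction_for(126.0))
print(fraction_for(343.0))
```

This is, of course, only the separation step; the conversion and treatment steps that follow (cracking, reforming, hydrotreating) are where the toxic catalysts discussed above enter the picture.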

Figure 8.3 Pathway of oil refining process (After Chhetri et al., 2007). Refining petroleum products emits several hazardous air toxics and particulate materials. They are produced during the transfer and storage of materials and during hydrocarbon separation. Table 8.1 shows the emissions released during hydrocarbon separation and handling.

Table 8.1 Emissions from a refinery (Environmental Defense, 2005).

Material transfer and storage
– Air releases: volatile organic compounds (VOCs)
– Hazardous solid wastes: anthracene, benzene, 1,3-butadiene, cumene, cyclohexane, ethylbenzene, ethylene, methanol, naphthalene, phenol, PAHs, propylene, toluene, 1,2,4-trimethylbenzene, xylene

Separating hydrocarbons
– Air releases: carbon monoxide, nitrogen oxides, particulate matter, sulfur dioxide, VOCs
– Hazardous solid wastes: ammonia, anthracene, benzene, 1,3-butadiene, cumene, cyclohexane, ethylbenzene, ethylene, methanol, naphthalene, phenol, PAHs, propylene, toluene, 1,2,4-trimethylbenzene, xylene

Table 8.2 shows the primary wastes generated from an oil refinery. In all processes, air toxics and hazardous solid materials, including volatile organic compounds, are present.

Table 8.2 Primary wastes from an oil refinery (Environmental Defense, 2005).

Cracking/coking
– Air releases: carbon monoxide, nitrogen oxides, particulate matter, sulfur dioxide, VOCs
– Hazardous/solid wastes: ammonia, anthracene, benzene, 1,3-butadiene, copper, cumene, cyclohexane, ethylbenzene, ethylene, methanol, naphthalene, nickel, phenol, PAHs, propylene, toluene, 1,2,4-trimethylbenzene, vanadium (fumes and dust), xylene
– Wastewater

Alkylation and reforming
– Air releases: carbon monoxide, nitrogen oxides, particulate matter, sulfur dioxide, VOCs
– Hazardous/solid wastes: ammonia, benzene, phenol, propylene, sulfuric acid aerosols or hydrofluoric acid, toluene, xylene
– Wastewater

Sulfur removal
– Air releases: carbon monoxide, nitrogen oxides, particulate matter, sulfur dioxide, VOCs
– Hazardous/solid wastes: ammonia, diethanolamine, phenol, metals
– Wastewater

There are various sources of emissions in the petroleum refining and petrochemical industries; the following are the major categories of emission sources (US EPA, 2008). Process Emissions In petroleum refining and petrochemical industries, the typical processes include separations, conversions, and treating processes, such as cracking, reforming, isomerization, etc. The emissions arising from these processes are termed process emissions, and are typically released from process vents, sampling points, safety valve releases, and similar items. Combustion Emissions Combustion emissions are generated from the burning of fuels for production and transportation purposes. The nature and quantity of the emissions depend upon the kind of fuel being used. Generally, combustion emissions are released from stationary fuel combustion sources such as furnaces, heaters, and steam boilers, but they can also be released from flares, which are used intermittently for the controlled release of hazardous materials during process upsets. Fugitive Emissions

Fugitive emissions include sudden leaks of vapors from equipment or pipelines, as well as continuous small leaks from seals on equipment. These emissions are not released from vents and flares, but may occur at any location within a facility. Sources of fugitive emissions are mostly valves, pumps, compressors, and piping flanges. Fugitive emissions are a source of growing concern, as their effective control requires good process safety mechanisms for mitigation, as well as ongoing leak detection and repair programs. Storage and Handling Emissions These emissions are released from the storage and handling of natural gas, oil, and its derivatives. This is a potential problem in every petroleum refining and petrochemical plant, including product distribution sites. Handling mainly includes loading and unloading operations for shipping products to customers. Though many refinery products are transported through pipelines, other means such as marine vessels and trucks are also used; in these cases, there may be emissions during material transfer to the vehicles. Auxiliary Emissions Auxiliary emissions originate from units such as cooling towers, boilers, sulfur recovery units, and wastewater treatment units. Atmospheric emissions from cooling towers mainly comprise gases that are stripped when the water phase comes into contact with air during the cooling process. In wastewater treatment units, emissions may arise from the stripping of VOCs from contaminated wastewater in ponds, pits, drains, or aeration basins. 8.2.1.3 Pathways of Gas Processing Natural gas is a mixture of methane, ethane, propane, butane and other hydrocarbons, water vapor, oil and condensates, hydrogen sulfide, carbon dioxide, nitrogen, some other gases, and solid particles. Free water and water vapor are corrosive to transportation equipment. Hydrates can plug gas accessories, creating several flow problems.
Other gas constituents such as hydrogen sulfide and carbon dioxide lower the heating value of natural gas, reducing its overall fuel efficiency. Major transportation pipelines impose restrictions on the make-up of the natural gas allowed into the pipeline, called 'pipeline quality' gas. This makes it mandatory that natural gas be purified before it is sent into transportation pipelines. Gas processing is aimed at preventing the corrosion, environmental, and safety hazards associated with the transport of natural gas. The presence of water in natural gas creates several problems. Liquid water and natural gas can form solid, ice-like hydrates that can plug valves and fittings in the pipeline (Nallinson, 2004). Natural gas containing liquid water is corrosive, especially if it contains carbon dioxide and hydrogen sulfide. Water vapor in natural gas transport systems may condense, causing sluggish flow. Hence, the removal of free water, water vapor, and condensates is a very important step during gas processing. Other impurities of natural gas, such as carbon dioxide and hydrogen sulfide, generally called acid gases, must be removed from the natural gas prior to its transportation (Chakma, 1999). Hydrogen sulfide is a toxic and corrosive gas which is rapidly oxidized to sulfur dioxide in the atmosphere (Basu et al., 2004). Oxides of nitrogen found in traces in natural gas may contribute to ozone layer depletion and global warming. Figure 8.4 illustrates the pathway of natural gas processing from the reservoir to end uses, along with the various emissions from the different processing steps. After exploration and production, the natural gas stream is sent through the processing systems.

Figure 8.4 Natural gas "well to wheel" pathway. Figure 8.5 is a schematic of a general gas processing system. Glycol dehydration is used for water removal from the natural gas stream. Similarly, monoethanolamine (MEA) and diethanolamine (DEA) are used for removing H2S and CO2 from the gas streams (Figure 8.5). Since these chemicals are used in gas processing, it is impossible to completely free the gas of them. Glycols and amines are very toxic chemicals. Burning ethylene glycol produces carbon monoxide (Matsuoka et al., 2005), so when contaminated natural gas is burned in stoves, the emissions may contain carbon monoxide. Carbon monoxide is a poisonous gas and very harmful to health and the environment. Similarly, amines are toxic chemicals, and burning gas contaminated by amines produces toxic emissions. Despite the prevalent notion that natural gas burns cleanly, the emissions are not free of environmental problems. It is reported that one of the highly toxic compounds released when burning gas (LPG) in stoves is isobutane, which causes hypoxia in the human body (Sugie et al., 2004).

Figure 8.5 Natural gas processing methods (Redrawn from Chhetri and Islam, 2006b). 8.2.1.3.1 Pathways of Glycols and Amines Conventional natural gas processing relies on various types of chemicals and polymeric membranes. These are all synthetic products derived from petroleum sources after a series of denaturing steps. The common chemicals used to remove water, CO2, and H2S are diethylene glycol (DEG) and triethylene glycol (TEG), and monoethanolamine (MEA), diethanolamine (DEA), and triethanolamine (TEA). These are synthetic chemicals with various health and environmental impacts. The synthetic polymers used as membranes during gas processing are highly toxic, and their production involves highly toxic catalysts and chemicals as well as excessive heat and pressure (Chhetri et al., 2006a). Hull et al. (2002) reported that combustion of ethylene-vinyl acetate copolymer (EVA) yields high levels of CO and several volatile compounds along with CO2. Islam et al. (2010) reported that the oxidation of polymers produces more than 4000 toxic chemicals, 80 of which are known carcinogens. Matsuoka et al. (2005) studied the electro-oxidation of methanol and glycol and found that electro-oxidation of ethylene glycol at 400 mV forms glycolate, oxalate, and formate (Figure 8.6). The glycolate was obtained by three-electron oxidation of ethylene glycol and remained electrochemically active even at 400 mV, which led to further oxidation of the glycolate. Oxalate was found to be stable, undergoing no further oxidation, and was termed the non-poisoning path. The other product of glycol oxidation, formate, marks what is termed the poisoning path, or CO poisoning path. Glycolate formation decreased from 40% to 18%, while formate increased from 15% to 20%, between 400 and 500 mV. Thus, ethylene glycol oxidation produces CO instead of CO2 and follows the poisoning path above 500 mV. Glycol oxidation also produces glycolaldehyde as an intermediate product.
Hence, the use of these products in refining has several impacts on the end uses, and is not sustainable at all.

Figure 8.6 Ethylene Glycol Oxidation Pathway in Alkaline Solution (After Matsuoka et al., 2005). Glycol ethers are known to produce toxic metabolites such as the teratogenic methoxyacetic acid during biodegradation, the biological treatment of glycol ethers can be hazardous (Fischer and Hahn, 2005). Abiotic degradation experiments with ethylene glycol showed that the byproducts are monoethylether (EGME) and toxic aldehydes, e.g. methoxy acetaldehyde (MALD). Glycol passes into body by inhalation, ingestion or skin. Toxicity of ethylene glycol causes depression and kidney damage (MSDS, 2005). High concentration levels can interfere with the ability of the blood to carry oxygen causing headache and a blue color to the skin and lips (methemoglobinemia), collapse and even death. High exposure may affect the nervous system and may damage the red blood cells leading to anemia (low blood count). During a study of carcinogenetic and toxicity of propylene glycol on animals, the skin tumor incidence was observed (CERHR, 2003). Glycol may form toxic alcohol inside human body if ingested as fermentation may take place. Amines are considered to be toxic chemicals. It was reported that occupational asthma was found in people handling of a cutting fluid containing diethanolamine (Piipari et al., 1998). Toninello (2006) reported that the oxidation products of some biogenic amines appear to be also carcinogenic. DEA also reversibly inhibits phosphatidylcholine synthesis by blocking choline uptake (Lehman-McKeeman and Gamsky, 1999). Systemic toxicity occurs in many tissue types including the nervous system, liver, kidney, and blood system that may cause increased blood pressure, diuresis, salivation, and pupillary dilation. Diethanolamine causes mild skin irritation to the rabbit at concentrations above 5%, and severe ocular irritation at concentrations above 50% (Beyer et al., 1983). 
Ingestion of diethylamine causes severe gastrointestinal pain, vomiting, and diarrhea, and may result in perforation of the stomach possibly due to the oxidation products and fermentation products.

8.3 Critical Evaluation of Current Petroleum Practices In a very short historical time (relative to the history of the environment), the oil and gas industry has become one of the world's largest economic sectors, a powerful globalizing force with far-reaching impacts on the entire planet that humans share with the rest of the natural world. Decades of continuous growth of oil and gas operations have changed, and in some places transformed, the natural environment and the way humans have traditionally organized themselves. The petroleum sector draws huge public attention due to its environmental consequences. All stages of oil and gas operations generate a variety of solid, liquid, and gaseous wastes (Currie and Isaacs, 2005; Wenger et al., 2004; Khan and Islam, 2003b; Veil, 2002; de Groot, 1996; Holdway, 2002) harmful to humans and the natural environment. Figure 8.7 shows that current technological practices are focused on short-term, linearized solutions that are also aphenomenal. As a result, technological disaster prevails in practically every aspect of the post-Renaissance era. Petroleum practices are considered the driver of today's society, whose modern development is essentially dependent on artificial products and processes. We have reviewed the post-Renaissance transition, calling it the honey-sugar-saccharin-aspartame (HSSA) syndrome. In this allegorical transition, honey (with a real source and process) has been systematically replaced by aspartame, whose source and pathway are both highly artificial. This sets in motion the technology development mode that Nobel Laureate in Chemistry Robert Curl called a "technological disaster".

Figure 8.7 Schematic showing the position of current technological practices relative to natural practices. By now, the unpleasant truth has been recognized that the present natural resource management regime governing activities such as petroleum operations has failed to ensure environmental safety and ecosystem integrity. The main reason for this failure is that the existing management scheme is not sustainable. Under the present management approach, development activities are allowed so long as they promise economic benefit. Once that benefit appears likely, management guidelines are set to justify the project's acceptance. Sustainable petroleum development requires a sustainable supply of clean and affordable energy resources that do not cause negative environmental, economic, and social consequences (Dincer and Rosen, 2004, 2005). In addition, it should take a holistic approach, in which the whole system is considered instead of just one sector at a time (Islam et al., 2010). In 2007, Khan and Islam developed an innovative criterion for achieving true sustainability in technological development. New technology should have the potential to be efficient and functional far into the future in order to ensure true sustainability. Sustainable development is seen as having four elements: economic, social, environmental, and technological.

8.3.1 Management Conventional management of petroleum operations is being challenged due to the environmental damage caused by those operations. Moreover, the technical picture of petroleum operations and management is very grim (Deakin and Konzelmann, 2004). The ecological impacts of petroleum discharges, including habitat destruction and fragmentation, are recognized as major concerns associated with petroleum and natural gas development in both terrestrial and aquatic environments. There is clearly a need to develop a new management approach to hydrocarbon operations. This approach will have to be environmentally acceptable, economically profitable, and socially responsible. These problems might be overcome by the application of new technologies that guarantee sustainability. Figure 8.8 shows the different phases of petroleum operations, which are seismic, drilling, production, transportation and processing, and decommissioning, as well as their associated waste generation and energy consumption. Various types of waste are produced during petroleum operations: waste from ships, CO2 emissions, human-related waste, drilling mud, produced water, radioactive materials, oil spills, releases of injected chemicals, toxic releases from corrosion inhibitors, metals and scraps, flares, etc. Even though petroleum companies make billions of dollars of profit from their operations each year, these companies take no responsibility for the various wastes generated. Hence, overall, society is degraded by such environmental damage. Until the mess created by petroleum operations is rendered environmentally friendly, society as a whole will not benefit from these valuable natural resources.

Figure 8.8 Different phases of petroleum operations (seismic, drilling, production, transportation and processing, and decommissioning) and their associated waste generation and energy consumption (Khan and Islam, 2006a). Khan and Islam (2007) introduced a new approach by means of which it is possible to develop a truly sustainable technology. Under this approach, the temporal factor is considered the prime indicator in sustainable technology development. Khan and Islam (2007) discussed some implications of how the current management model for exploring, drilling, managing wastes, refining, transporting, and using the by-products of petroleum has been lacking in foresight, and suggested the beginnings of a new management approach. A common practice among all oil producing companies is to burn off any unwanted gas that is liberated from oil during production. This process ensures the safety of the rig by reducing the pressures in the system that result from gas liberation. This gas is of low quality and contains many impurities, and by burning it off, toxic particles are released into the atmosphere. Acid rain, caused by sulfur oxides in the atmosphere, is one of the main environmental hazards resulting from this process. Moreover, the flaring of natural gas accounts for approximately a quarter of the petroleum industry's emissions (UKOOA, 2003). At present, flaring of gases onsite and disposal of liquids and solids containing less than a certain concentration of hydrocarbons are allowed.

8.3.2 Current Practices in Exploration, Drilling and Production Seismic exploration is used for the preliminary investigation of geological information in a study area and is considered the safest among all activities in petroleum operations, with little or negligible negative impact on the environment (Diviacco, 2005; Davis et al., 1998). However, several studies have shown that it has several adverse environmental impacts (Jepson et al., 2003; Khan and Islam, 2007). Most of the negative effects are from the intense sound generated during the survey. Seismic surveys can cause direct physical damage to fish. High-pressure sound waves can damage the hearing system, swim bladders, and other tissues and systems. These effects might not directly kill the fish, but they may lead to reduced fitness, which increases their susceptibility to predation and decreases their ability to carry out important life processes. There might also be indirect effects from seismic operations: if a seismic operation disturbs the food chain or food web, it will cause adverse impacts on fish and the total fishery. The physical and behavioral effects on fish from seismic operations are discussed in the following sections. It has also been reported that seismic surveys cause behavioral effects among fish, for example a startle response, changes in swimming patterns (potentially including changes in swimming speed and directional orientation), and changes in vertical distribution. These effects are expected to be short-term, with a duration less than or equal to the duration of exposure; they are expected to vary between species and individuals, and to depend on the properties of the received sound. The ecological significance of such effects is expected to be low, except where they influence reproductive activity. Some studies of the effects of seismic sound on eggs and larvae or on zooplankton were found. Other studies showed that exposure to sound may arrest the development of eggs and cause developmental anomalies in a small proportion of exposed eggs and/or larvae; however, these results occurred at numbers of exposures much higher than are likely to occur during field operations, and at sound intensities that occur only within a few meters of the sound source.
In general, the magnitude of egg or larval mortality that models predict could result from exposure to seismic sound is far below the level that would be expected to affect populations. Similar physical, behavioral, and physiological effects have also been reported in invertebrates, and marine turtles and mammals are likewise significantly affected by seismic activities. The essence of all exploration activities hinges upon the use of some form of wave to depict subsurface structures. It is important to note that practically all such techniques use artificial waves, generated from sources with variable levels of radiation. Recently, Himpsel (2007) presented a correlation between energy level and the wavelength of photon energy (Figure 8.9): the photon energy decreases as the wavelength increases. Sources that generate waves capable of penetrating deep into the formation are therefore more likely to be of a high energy level, and hence more hazardous to the environment.

Figure 8.9 Schematic of wavelength and energy level of photon (From Islam et al., 2010).

Table 8.3 shows the quantum energy levels of various radiation sources. The γ-rays, which have the shortest wavelength, have the highest quantum energy level and the highest energy intensity. More energy is needed to produce such radiation, whether for drilling or any other application. For instance, laser drilling, which is considered to be the wave of the future, will be inherently toxic to the environment.

Table 8.3 Wavelength and quantum energy levels of different radiation sources (From Islam et al., 2015).

Radiation     Wavelength      Quantum energy
Infrared      1 mm–750 nm     0.0012–1.65 eV
Visible       750–400 nm      1.65–3.1 eV
Ultraviolet   400–10 nm       3.1–124 eV
X-rays        10 nm           124 eV
γ-rays        10⁻¹² m         1 MeV
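The inverse relation between wavelength and quantum energy shown in Table 8.3 follows directly from the photon energy relation E = hc/λ. A minimal sketch (the energies below are computed from physical constants, not read from the table, so they serve as an independent check of the tabulated values):

```python
# Photon energy vs. wavelength: E = h*c / lambda.
# Shorter wavelengths carry higher quantum energies, which is why
# deep-penetrating short-wavelength sources are the most energetic.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_m):
    """Quantum energy of a photon of the given wavelength, in eV."""
    return H * C / (wavelength_m * EV)

bands = {
    "infrared  (1 mm)":    1e-3,
    "visible   (750 nm)":  750e-9,
    "UV        (400 nm)":  400e-9,
    "X-ray     (10 nm)":   10e-9,
    "gamma     (1e-12 m)": 1e-12,
}
for name, lam in bands.items():
    print(f"{name} -> {photon_energy_ev(lam):.4g} eV")
```

The computed values (about 0.0012 eV at 1 mm, 1.65 eV at 750 nm, 124 eV at 10 nm, and roughly 1 MeV at 10⁻¹² m) agree with the table's band boundaries.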

Drilling and production activities also have adverse effects on the environment in several ways: blow-outs and the flaring of produced gas waste energy and emit carbon dioxide into the atmosphere, and careless disposal of drilling mud and other oily materials can have a toxic effect on terrestrial and marine life. Before drilling and production operations are allowed to go ahead, a Valued Ecosystem Component (VEC) level impact assessment should be carried out to establish the ecological and environmental conditions of the area proposed for development and to assess the risks the development poses to the environment. Bjorndalen et al. (2005) developed a novel approach to avoid flaring during petroleum operations. Petroleum products contain materials in various phases: solids in the form of fines, liquid hydrocarbons, carbon dioxide, and hydrogen sulfide are among the many substances found in the products. According to Bjorndalen et al. (2005), by separating these components through the following steps, no-flare oil production can be established (Figure 8.10). Simply by avoiding flaring, over 30% of the pollution created by petroleum operations can be eliminated. Once the conditions for no-flaring have been fulfilled, value-added end products can be developed: for example, the solids can be used as a mineral source, the brine can be purified, and the low-quality gas can be re-injected into the reservoir for EOR.

Figure 8.10 Breakdown of the no-flaring method (Bjorndalen et al., 2005).
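The no-flare scheme of Figure 8.10 amounts to routing each separated component of the well stream to a value-added destination instead of a flare. A sketch of that routing (component names and destinations paraphrase the text; they are not Bjorndalen et al.'s own notation):

```python
# Value-added destinations for separated well-stream components
# under a no-flare scheme (illustrative names, per the text).
ROUTES = {
    "solids (fines)":     "recover minerals",
    "brine":              "purify for reuse",
    "liquid hydrocarbon": "send to sales / refining",
    "low-quality gas":    "re-inject into reservoir for EOR",
}

def route(component):
    """Destination of one separated component; nothing defaults to flaring."""
    return ROUTES.get(component, "characterize before disposal")

for comp in ROUTES:
    print(f"{comp} -> {route(comp)}")
```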

8.3.3 Challenges in Waste Management

The drilling and production phases generate the most waste in petroleum operations. Drilling muds are condensed liquids, which may be oil- or synthetic-based, containing a variety of chemical additives and heavy minerals; they are circulated through the drill pipe to perform a number of functions. These include cleaning and conditioning the hole, maintaining hydrostatic pressure in the well, lubricating the drill bit, counterbalancing formation pressure, removing the drill cuttings, and stabilizing the wall of the borehole. Water-based muds (WBMs) are a complex blend of water and bentonite. Oil-based muds (OBMs) are composed of mineral oil, barite, and chemical additives. Typically, a single well may generate 1000–6000 m³ of cuttings and muds, depending on the nature of the cuttings, well depth, and rock type (CEF, 1998). A production platform generally consists of 12 wells, which may generate (12 × 5000 m³) 60,000 m³ of wastes (Patin 1999; CEF 1998). Figure 8.11 shows the supply chain of petroleum operations, indicating the types of waste generated.
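The platform-level waste figure follows from simple multiplication of the per-well range. A quick check of the arithmetic (this also confirms that the 60,000 m³ figure corresponds to 12 wells at 5000 m³ each):

```python
# Rough drilling-waste volume for a 12-well platform, using the
# per-well range of 1000-6000 m^3 quoted from CEF (1998).
WELLS_PER_PLATFORM = 12
PER_WELL_M3 = (1000, 6000)           # cuttings + muds per well, range

low = WELLS_PER_PLATFORM * PER_WELL_M3[0]
high = WELLS_PER_PLATFORM * PER_WELL_M3[1]
mid = WELLS_PER_PLATFORM * 5000      # the text's 5000 m^3/well estimate

print(f"platform waste: {low:,}-{high:,} m^3 (text's estimate: {mid:,} m^3)")
```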

Figure 8.11 Supply chain of petroleum operations (Khan and Islam, 2006a).

The current challenge of petroleum operations is how to minimize petroleum wastes and their long-term impacts. Conventional drilling and production methods generate an enormous amount of waste (Veil 1998; EPA, 2000). Existing management practices are mainly focused on achieving sectoral success and are not coordinated with other operations surrounding the development site. The following are the major wastes generated during drilling and production:

a. Drilling muds
b. Produced water
c. Produced sand
d. Storage displacement water
e. Bilge and ballast water
f. Deck drainage
g. Well treatment fluids
h. Naturally occurring radioactive materials
i. Cooling water
j. Desalination brine
k. Other assorted wastes

8.3.4 Problems in Transportation Operations

The most significant problems in production and transportation operations are reported by Khan and Islam (2007a). Toxic means are used to control corrosion: billions of dollars are spent on toxic agents to prevent microbial corrosion, and other applications include toxic paints, cathodic protection, etc., all causing irreversible damage to the environment. In addition, huge amounts of toxic chemicals (at times up to 40% of the gas stream) are injected to reduce the moisture content of a gas stream and thereby prevent hydrate formation. Even if only 1% of these chemicals remains in the gas stream, the gas acquires severe toxicity, particularly when it is burnt, contaminating the entire pathway of gas usage (Chhetri and Islam, 2006a). Toxic solvents are used to prevent asphaltene plugging, and toxic resins are used in sand consolidation to prevent sand production.

8.4 Greening of Petroleum Operations

8.4.1 Effective Separation of Solid from Liquid

Organic waste products such as cattle manure, slaughterhouse waste, vegetable waste, fruit peels and pits, dried leaves and natural fibers, wood ash, and natural rocks (limestone, zeolite, siltstone, etc.) are all viable options for the separation of fines and oil. In 1999, a patent was issued to Titanium Corporation for the separation of oil from sand tailings (Allcock et al., 1999). However, this technique uses chemical systems, such as NaOH and H2O2, which are both expensive and environmentally hostile. Drilling wastes have been found to be beneficial in highway construction (Wasiuddin et al., 2002). Studies have shown that tailings from oil sands are high in various minerals. To extract the minerals, chemical treatment is usually used to modify the mineral surfaces; treatment with a solution derived from natural materials has great potential here. Microwave heating can also enhance the selective floatability of different particles, an aspect studied by Gunal and Islam (2000). Temperature can be a major factor in the reaction kinetics of a biological solvent with mineral surfaces, and various metals respond differently under microwave conditions, which can change floatability significantly. The recovery process can be completed by transferring the microwave-treated fines to a flotation chamber (Henda et al., 2005); finally, an atomic absorption spectrometer can characterize the flotation products. The application of bio-membranes to separate solids from liquids has also received considerable attention recently (Kota, 2012). Even though synthetic membranes are being used for some applications, they are highly energy-intensive to produce, toxic, and costly.

8.4.2 Effective Separation of Liquid from Liquid

Current practice involves separating oil from water and disposing of the water as long as its hydrocarbon concentration is below an allowable limit. However, such practice cannot be sustained, and further purification of the water is necessary. Consequently, this task involves both the separation of oil and water and the removal of heavy metals from the water.

An oil-water emulsion was produced, and it was found that the selected paper-fiber material gives 98–99% recovery of oil without passing any water (Khan and Islam, 2006). Emulsions made up of varying ratios of oil and water were used in different sets of experiments; almost all oil-water ratios gave the same separation efficiency with this material. The paper used as the filtering medium is made of long-fibered wood pulp treated with a waterproofing agent, in this case "rosin soap" (a rosin solution treated with caustic soda). The soap is then treated with alum to keep the pH of the solution in the range of 4–5. The cellulose in the paper reacts reversibly with the rosin soap in the presence of alum to form a chemical coating around the fibrous structure that prevents water from seeping through. This coating allows long-chain oil molecules to pass, making the paper a good conductor of the oil stream. Because the reaction is reversible, it was also observed that the performance of the filter medium increases with the acidity of the emulsion, and vice versa. The filter medium is durable enough for continuous long-term use, keeping replacement and production costs down, and it is environmentally friendly and suitable for down-hole conditions. Other inexpensive, down-hole-compatible paper-equivalent materials can also be used for oil-water separation. Human hair has proven effective in separating oil and water as well as in heavy-metal removal. Similarly, natural zeolites can separate liquids from different solutions: such zeolites adsorb some liquids while leaving others, depending on their molecular weight.

8.4.3 Effective Separation of Gas from Gas

Most of the hydrocarbons found in natural gas wells are complex mixtures of hundreds of different compounds. A typical natural gas stream is a mixture of methane, ethane, propane, butane, and other hydrocarbons, along with water vapor, oil and condensates, hydrogen sulfide, carbon dioxide, nitrogen, other gases, and solid particles. Free water and water vapor are corrosive to transportation equipment, and hydrates can plug gas accessories, creating several flow problems. Gas mixtures containing hydrogen sulfide and carbon dioxide have a lower heating value and thus reduced overall fuel efficiency. Major transportation pipelines impose restrictions on the make-up of the natural gas allowed into the pipeline, known as "pipeline quality" gas, making it mandatory that natural gas be purified before it is sent into transportation pipelines. Gas processing is aimed at preventing the corrosion, environmental, and safety hazards associated with transporting natural gas.

The presence of water in natural gas creates several problems. Liquid water and natural gas can form solid, ice-like hydrates that can plug valves and fittings in the pipeline (Nallinson, 2004). Natural gas containing liquid water is corrosive, especially if it also contains carbon dioxide and hydrogen sulfide, and water vapor in natural gas transport systems may condense, causing sluggish flow. Hence, the removal of free water, water vapor, and condensates is a very important step in gas processing. Other impurities of natural gas, such as carbon dioxide and hydrogen sulfide, generally called acid gases, must be removed prior to transportation (Chakma, 1999). Hydrogen sulfide is a toxic and corrosive gas that is rapidly oxidized to sulfur dioxide in the atmosphere (Basu et al., 2004). Oxides of nitrogen, found in traces in natural gas, may cause ozone layer depletion and global warming. Hence, environment-friendly gas processing is essential for greening petroleum operations.

8.4.4 Natural Substitutes for Gas Processing Chemicals (Glycol and Amines)

Glycol is one of the most important chemicals used in the dehydration of natural gas. In the search for the cheapest and most abundantly available material, clay has been considered one of the best substitutes for toxic glycol. Clay is a porous material containing various minerals, such as silica and alumina, among others. Low et al. (2003) reported that the water absorption characteristics of sintered clay can be modified by the addition of sawdust particles. Dry clay used as a plaster has a water absorption coefficient of 0.067–0.075 kg/(m²·s¹ᐟ²), where the weight of water absorbed is in kilograms, surface area in square meters, and time in seconds (Website 1). Preliminary experimental results have indicated that clay can absorb a considerable amount of water vapor and can be used efficiently in the dehydration of natural gas (Figure 8.12). Moreover, glycol can be obtained from natural sources, which are not toxic like synthetic glycol. Glycol can be extracted from Tricholoma matsutake, an edible mushroom (Ahn and Lee, 1986), and ethylene glycol is also found as a metabolite of ethylene, which regulates natural plant growth (Blomstrom and Beyer, 1980). Orange peel oils can also replace synthetic glycol. Such natural glycols, derived without non-organic chemicals, can replace the synthetic ones. Recent work by Miralai et al. (2006) has demonstrated that such considerations are vital.
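The quoted absorption coefficient implies a square-root-of-time uptake law, m = A·C·√t, with C in kg/(m²·s¹ᐟ²). A minimal sketch of what that range means in practice (the exposure area and duration below are made-up illustration values, not from the cited study):

```python
import math

# Capillary water uptake of a clay plaster, m = A * C * sqrt(t),
# using the absorption-coefficient range 0.067-0.075 kg/(m^2*s^0.5)
# quoted from Low et al. (2003) / Website 1.
def water_absorbed_kg(coeff, area_m2, seconds):
    """Mass of water absorbed after `seconds` of contact."""
    return coeff * area_m2 * math.sqrt(seconds)

area = 0.5          # m^2 of exposed clay surface (assumed)
one_hour = 3600.0   # s
for c in (0.067, 0.075):
    print(f"C = {c}: {water_absorbed_kg(c, area, one_hour):.2f} kg in 1 h")
```

For a half-square-meter surface, the range works out to roughly 2.0–2.3 kg of water in the first hour, which is why the text considers clay a serious candidate for gas dehydration.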

Figure 8.12 Water vapor absorption by Nova Scotia clay (Chhetri and Islam, 2008).

Amines are used in natural gas processing to remove H2S and CO2. Monoethanolamine (MEA), DEA, and TEA are members of the alkanolamine family. These are synthetic chemicals whose toxicity has been discussed earlier; if they are instead extracted from natural sources, such toxicity is not expected. Monoethanolamine is found in hemp oil, which is extracted from the seeds of the hemp (Cannabis sativa) plant: 100 grams of hemp oil contain 0.55 mg of monoethanolamine (Chhetri and Islam, 2008). Moreover, an experimental study showed that olive oil and waste vegetable oil can absorb sulfur dioxide. Figure 8.13 shows the decrease in pH of de-ionized water with time, which could serve as a model for removing sulfur compounds from natural gas streams. Calcium hydroxide can also be used to remove CO2 from natural gas.

Figure 8.13 Decrease of pH with time due to sulfur absorption in de-ionized water (Chhetri and Islam, 2008).

8.4.5 Membranes and Absorbents

Various types of synthetic membranes are in use for gas separation; some are liquid membranes and some are polymeric. Liquid membranes operate by immobilizing a liquid solvent in a microporous filter or between polymer layers. A high degree of solute removal can be obtained when using chemical solvents: when the gas or solute reacts with the liquid solvent in the membrane, the liquid-phase diffusivity increases, which increases the overall flux of the solute. Furthermore, solvents can be chosen to selectively remove a single solute from a gas stream, improving selectivity (Astrita et al. 1983). Saha and Chakma (1992) suggested embedding a liquid membrane in a microporous polymeric membrane: they immobilized mixtures of various amines, such as monoethanolamine (MEA), diethanolamine (DEA), amino-methyl-propanol (AMP), and polyethylene glycol (PEG), in a microporous polypropylene film and placed it in a permeator. They tested the mechanism for the separation of carbon dioxide from hydrocarbon gases and obtained separation factors as high as 145.

Polymeric membranes have been developed for a variety of industrial applications, including gas separation, where the selectivity and permeability of the membrane material determine the efficiency of the process. Based on flux density and selectivity, membranes can be classified broadly into two classes: porous and nonporous. A porous membrane is a rigid, highly voided structure with randomly distributed, interconnected pores. Separation by a porous membrane is mainly a function of the permeate character and membrane properties, such as the molecular size of the membrane polymer, pore size, and pore-size distribution. A porous membrane is similar in structure and function to a conventional filter. In general, only molecules that differ considerably in size can be separated effectively by microporous membranes: porous membranes for gas separation exhibit high flux but low selectivity. Moreover, synthetic membranes are not as environment-friendly as biodegradable bio-membranes. The efficiency of polymeric membranes decreases with time due to fouling, compaction, chemical degradation, and thermal instability. Because of this limited thermal stability and susceptibility to abrasion and chemical attack, polymeric membranes have found only limited application in separation processes where hot, reactive gases are encountered. This has resulted in a shift of interest toward inorganic membranes.

Inorganic membranes are increasingly being explored to separate gas mixtures. Besides appreciable thermal and chemical stability, inorganic membranes offer much higher gas fluxes than polymeric membranes. There are basically two types of inorganic membranes: dense (nonporous) and porous. Examples of commercial porous inorganic membranes are ceramics, such as alumina, silica, titania, and glass, and porous metals, such as stainless steel and silver. These membranes are characterized by high permeabilities and low selectivities. Dense inorganic membranes are specific in their separation behavior; for example, Pd-metal-based membranes are hydrogen-specific and metal-oxide membranes are oxygen-specific. Palladium and its alloys have been studied extensively as potential membrane materials. Air Products and Chemicals, Inc. developed the Selective Surface Flow (SSF) membrane, which consists of a thin layer (2–3 nm) of nano-porous carbon supported on a macroporous alumina tube (Rao et al. 1992). The effective pore diameter of the carbon matrix is 5–7 Å (Rao and Sircar 1996). The membrane separates the components of a gas mixture by a selective adsorption-surface diffusion-desorption mechanism (Rao and Sircar 1993). A variety of bio-membranes are also in use today.
Membranes such as human hair can be used instead of synthetic membranes for gas-gas separation (Basu et al., 2004; Akhter, 2002). The use of human hair as a bio-membrane has been illustrated by Khan and Islam (2006a). Initial results indicated that human hairs behave like hollow-fiber cylinders, but are even more effective because their flexible nature and texture allow a hybrid system combining solvent absorption with mechanical separation. Natural absorbents such as silica gels can also be used to absorb various contaminants from the natural gas stream. Khan and Islam (2006a) showed that synthetic membranes can be replaced by simple paper membranes for oil-water separation. Moreover, limestone has the potential to separate sulfur dioxide from natural gas (Akhter, 2002). Caustic soda combined with wood ash was found to be an alternative to zeolite; since caustic soda is a synthetic chemical, waste materials such as okra extract can be a good substitute. The same technique can be used with any type of exhaust, large (power plants) or small (cars). Once the gases are separated, the low-quality gas can be injected into the reservoir as an enhanced oil recovery technique, enhancing system efficiency, or converted into power by a turbine. Bjorndalen et al. (2005) developed a comprehensive scheme for separating petroleum products in different forms using novel materials, with value addition of the byproducts.
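The figure of merit behind numbers like the "separation factors as high as 145" reported for the amine liquid membranes above is the ratio of the permeate-side composition ratio to the feed-side composition ratio. A minimal sketch (the compositions below are illustrative assumptions, not values from Saha and Chakma's study):

```python
# Membrane separation factor for a binary CO2/CH4 split:
#   alpha = (y_CO2 / y_CH4) / (x_CO2 / x_CH4)
# where y is the permeate mole fraction and x the feed mole fraction.
def separation_factor(y_co2, y_ch4, x_co2, x_ch4):
    """Permeate (y) enrichment relative to feed (x)."""
    return (y_co2 / y_ch4) / (x_co2 / x_ch4)

# Example: a feed of 10% CO2 enriched to ~94% CO2 in the permeate
# gives a separation factor of roughly 145.
alpha = separation_factor(0.9415, 0.0585, 0.10, 0.90)
print(f"alpha = {alpha:.0f}")
```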

8.4.6 A Novel Desalination Technique

Management of produced water during petroleum operations poses a unique challenge. The salt concentration of this water is very high, and it cannot simply be disposed of; to bring the concentration down, expensive and energy-intensive techniques are currently practiced. Recently, Khan et al. (2006) and Islam (2012; 2016) developed a novel desalination technique that can be characterized as a totally environment-friendly process. It uses no non-organic chemicals (e.g., membranes or additives) and relies on the following chemical reactions in four stages:

1. saline water + CO2 + NH3 → 2. precipitates (valuable chemicals) + desalinated water → 3. plant growth in solar aquarium → 4. further desalination

This process is a significant improvement over an existing US patent. The improvements are in the following areas:

– the CO2 source is the exhaust of a power plant (negative cost);
– the NH3 source is sewage water (negative cost, plus the advantage of organic origin);
– the addition of plant growth in a solar aquarium (emulating the world's first and biggest solar aquarium, in New Brunswick, Canada).

This process works very well for general desalination of sea water. However, produced water from petroleum formations commonly has a salt concentration much higher than sea water. In that case, plant growth (Stage 3 above) is not possible, because the salt concentration is too high, and even Stage 1 does not function properly, because chemical reactions slow down at high salt concentrations. The process can be enhanced by adding one more stage, so that it functions as:

1. saline water + ethyl alcohol → 2. saline water + CO2 + NH3 → 3. precipitates (valuable chemicals) + desalinated water → 4. plant growth in solar aquarium → 5. further desalination

Care must be taken, however, to avoid using non-organic ethyl alcohol. Further value addition can be achieved if the ethyl alcohol is extracted from fermented waste organic materials.
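The reagent demand of the CO2/NH3 stage can be estimated from stoichiometry. Assuming the dominant precipitation reaction is the Solvay-type NaCl + CO2 + NH3 + H2O → NaHCO3(s) + NH4Cl (our assumption for illustration; the text only says "precipitates (valuable chemicals)"), the mass of each reagent per kilogram of salt converted follows from the molar masses:

```python
# Stoichiometric reagent demand for a Solvay-type precipitation,
#   NaCl + CO2 + NH3 + H2O -> NaHCO3(s) + NH4Cl  (1:1:1 moles),
# as an illustrative model of Stage 2 of the desalination scheme.
M_NACL, M_CO2, M_NH3 = 58.44, 44.01, 17.03  # molar masses, g/mol

def reagents_per_kg_salt():
    """kg of CO2 and NH3 needed per kg of NaCl converted."""
    mol = 1000.0 / M_NACL                    # moles of NaCl in 1 kg
    return mol * M_CO2 / 1000.0, mol * M_NH3 / 1000.0

co2, nh3 = reagents_per_kg_salt()
print(f"per kg NaCl: {co2:.2f} kg CO2, {nh3:.2f} kg NH3")
```

Roughly 0.75 kg of CO2 and 0.29 kg of NH3 per kilogram of salt, which is why sourcing both reagents from waste streams (power-plant exhaust, sewage) at negative cost matters economically.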

8.4.7 A Novel Refining Technique

Khan and Islam (2007) have identified the following sources of toxicity in conventional petroleum refining:

– the use of toxic catalysts;
– the use of artificial heat (e.g., combustion, electrical, nuclear).

The use of toxic catalysts contaminates the pathway irreversibly. These catalysts should be replaced by natural performance enhancers, as proposed by Chhetri and Islam (2008) in the context of biodiesel. In the proposed project, research will be performed to introduce catalysts that are available in their natural state, making the process environmentally acceptable and reducing its cost very significantly.

The problem of efficiency is often covered up by citing the local efficiency of a single component (Islam et al., 2006). When global efficiency is considered, artificial heating proves utterly inefficient (Khan and Islam, 2012; Chhetri and Islam, 2008). Recently, Khan and Islam (2016) demonstrated that direct heating with solar energy (enhanced by a parabolic collector) can be very effective and environmentally sustainable: they achieved up to 75% global efficiency, compared to some 15% when solar energy is used via conversion to electricity. They also found that the temperature generated by the solar collector can be quite high, even in cold countries; in hot climates it can exceed 300 °C, making it suitable for thermal cracking of crude oil. In this project, the design of a direct-heating refinery with natural catalysts will be completed. Note that direct solar heating (or wind energy) does not involve conversion into electricity, which would otherwise introduce toxic battery cells and make the overall process very low in efficiency.
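The gap between 75% and 15% illustrates the point about global efficiency: it is the product of every stage's efficiency along the whole pathway, so adding conversion stages multiplies the losses. A sketch (the per-stage numbers for the indirect chain are illustrative assumptions chosen to reproduce the ~15% overall figure, not measured values):

```python
# Global efficiency = product of stage efficiencies along the pathway.
# Direct solar heating has one stage; routing through electricity
# adds conversion stages whose losses compound.
from math import prod

direct = [0.75]                  # parabolic collector -> process heat
indirect = [0.20, 0.90, 0.85]    # PV conversion, power electronics, heater

print(f"direct:   {prod(direct):.0%}")
print(f"indirect: {prod(indirect):.0%}")
```

The indirect chain comes out to about 15% even though no single stage looks catastrophically bad, which is exactly the "local efficiency" cover-up the text describes.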

8.4.8 Use of Solid Acid Catalyst for Alkylation

Refiners typically use either hydrofluoric acid (HF), which can be deadly if spilled, or sulfuric acid, which is also toxic and increasingly costly to recycle. Refineries can instead use solid acid catalysts, in unsupported and supported forms of heteropolyacids and their cation-exchanged salts, which have recently proved effective in refinery alkylation. A solid acid catalyst is less widely dispersed into the environment than HF, and switching to one would also improve safety at a refinery. Solid acid catalysts are an environment-friendly replacement for liquid acids in many significant reactions, including the alkylation of light hydrocarbon gases to form iso-octane (alkylate) used in reformulated gasoline. The use of organic acids and enzymes for various reactions should be promoted. The catalysts in use today are very toxic and are discarded after a series of uses, polluting the environment; using less toxic catalysts therefore reduces pollution significantly. The use of nature-based catalysts such as zeolites, alumina, and silica should be promoted, and various biocatalysts and enzymes, which are nontoxic and of renewable origin, should be considered for future use.

8.4.9 Use of Bacteria to Break Down Heavier Hydrocarbons into Lighter Ones

Since crude oil is formed by the decomposition of biomass by bacteria at high temperature and pressure, there must be bacteria that can effectively break crude oil down into lighter products. A series of investigations is necessary to observe the effects of bacteria on crude oil.

8.4.10 Use of Cleaner Crude Oil

Crude oil itself is comparatively cleaner than its distillates, as it contains less sulfur and fewer toxic metals. The direct use of crude oil for various applications should be promoted. This will not only help maintain the environment, because of its less toxic nature, but also cost less, as it avoids expensive catalytic refining processes. Recently, the direct use of crude oil has attracted great interest. Several studies have investigated electricity generation from sawdust (Sweis, 2004; Venkataraman et al., 2004; Calle et al., 2005). Figure 8.14 shows the schematic of a scaled model developed by our research group in collaboration with Veridity Environmental Technologies (Halifax, Nova Scotia). A raw sawdust silo is equipped with a powered auger feeder. The sawdust is fed into another chamber, equipped with a powered grinder that pulverizes it into wood flour; this chamber is attached to a heat exchanger that dries the sawdust before it enters the grinder. The wood flour is then fed into the combustion chamber with a powered auger wood-flour feeder. Pulverizing the sawdust increases the surface area of the particles very significantly, and the removal of moisture increases the flammability of the feedstock. The additional energy required to run the feeder and the grinder is provided by the electricity generated by the generator itself, requiring no additional energy investment. The combustion chamber is equipped with a start-up fuel injector that uses biofuel; the initial temperature required to start up the combustion chamber is quite high and cannot be achieved without a liquid fuel. The exhaust of the combustion chamber is circulated through a heat exchanger to dry the sawdust prior to pulverization. As the combustion gases escape the combustion chamber, they cause the turbine blades to rotate, turning the drive shaft, which in turn turns the compressor turbine blades. The power generator is placed directly under the main drive shaft.

Figure 8.14 Schematic of a sawdust-fuelled electricity generator.

Fernandes and Brooks (2003) compared black carbon (BC) derived from various sources. One interesting feature of this study is that they examined the impact of the different sources on the composition, extractability, and bioavailability of the resulting BC. Using molecular fingerprints, they concluded that fossil BC may be more refractory than plant-derived BC. This is an important finding, as only recently has there been some advocacy that BC from fossil fuels may have a cooling effect, nullifying the contention that fossil fuel burning is the biggest contributor to global warming. It is possible that BC from fossil fuels has higher refractory ability; however, no study available to date quantifies the cooling effect or determines the overall effect of BC from fossil fuels. As for other effects, BC from fossil sources appears to be more harmful than BC from organic matter. For instance, vegetation fire residues, straw ash, and wood charcoals had only residual concentrations of n-alkanes (