Another great book from Friedman that goes in a slightly different direction than his others. In addition to providing an interesting portrait of the future of technology, society, and the resulting economic difficulties, he also advocates for the development of more social capital (trust). I've included some of the highlights I found that captured the most important points of the book. These are just from the first half or so since Goodreads limits how much you can include in a review.
In his sobering book Sabbath, the minister and author Wayne Muller observes how often people say to him, “I am so busy.” “We say this to one another with no small degree of pride,” Muller writes, “as if our exhaustion were a trophy, our ability to withstand stress a mark of real character … To be unavailable to our friends and family, to be unable to find time for the sunset (or even to know when the sun has set at all), to whiz through our obligations without time for a single, mindful breath, this has become a model of a successful life.” I’d rather learn to pause.
There are vintage years in wine and vintage years in history, and 2007 was definitely one of the latter. It was not just the iPhone that emerged in 2007—a whole group of companies emerged in and around that year. Together, these new companies and innovations have reshaped how people and machines communicate, create, collaborate, and think. In 2007, storage capacity for computing exploded thanks to the emergence that year of an open-source software framework called Hadoop, making “big data” possible for all. In 2007, development began on GitHub, a platform for writing and collaborating on software that would vastly expand the ability of software to start, as Netscape founder Marc Andreessen once put it, “eating the world.” On September 26, 2006, Facebook, a social networking site that had been confined to users on college campuses and at high schools, was opened to everyone at least thirteen years old with a valid e-mail address, and started to scale globally. In 2007, a micro-blogging company called Twitter, which had been part of a broader start-up, was spun off as its own separate platform and also started to scale globally. Change.org,
the most popular social mobilization website, emerged in 2007. In late 2006, Google bought YouTube, and in 2007 it launched Android, an open-standards platform for devices that would help smartphones scale globally with an alternative operating system to Apple’s iOS. In 2007, AT&T, the iPhone’s exclusive connectivity provider, invested in something called “software-enabled networks”—thus rapidly expanding its capacity to handle all the cellular traffic created by this smartphone revolution. According to AT&T, mobile data traffic on its national wireless network increased by more than 100,000 percent from January 2007 through December 2014. Also in 2007, Amazon released something called the Kindle, onto which, thanks to Qualcomm’s 3G technology, you could download thousands of books anywhere in the blink of an eye, launching the e-book revolution. In 2007, Airbnb was conceived in an apartment in San Francisco. In late 2006, the Internet crossed one billion users worldwide, which seems to have been a tipping point. In 2007, Palantir Technologies, the leading company using big data analytics and augmented intelligence to, among other things, help the intelligence community find needles in haystacks, launched its first platform. “Computing power and storage reached a level that made it possible for us to create an algorithm that could make a lot of sense out of things we could not make sense of before,” explained Palantir’s cofounder Alexander Karp. In 2005, Michael Dell decided to relinquish his job as CEO of Dell and step back from the hectic pace and just be its chairman. Two years later he realized that was bad timing. “I could see that the pace of change had really accelerated. I realized we could do all this different stuff. So I came back to run the company in … 2007.”
So a few years later, I began updating in earnest my view of how the Machine worked. A crucial impetus was a book I read in 2014 by two MIT business school professors—Erik Brynjolfsson and Andrew McAfee—entitled The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. The first machine age, they argued, was the Industrial Revolution, which accompanied the invention of the steam engine in the 1700s. This period was “all about power systems to augment human muscle,” explained McAfee in an interview, “and each successive invention in that age delivered more and more power. But they all required humans to make decisions about them.” Therefore, the inventions of that era actually made human control and labor “more valuable and important.” Labor and machines were, broadly speaking, complementary, he added. In the second machine age, though, noted Brynjolfsson, “we are beginning to automate a lot more cognitive tasks, a lot more of the control systems that determine what to use that power for. In many cases today artificially intelligent machines can make better decisions than humans.” So humans and software-driven machines may increasingly be substitutes, not complements.
Indeed, the good news is that we’ve gotten a little bit faster at adapting over the centuries, thanks to greater literacy and knowledge diffusion. “The rate at which we can adapt is increasing,” said Teller. “A thousand years ago, it probably would have taken two or three generations to adapt to something new.” By 1900, the time it took to adapt got down to one generation. “We might be so adaptable now,” said Teller, “that it only takes ten to fifteen years to get used to something new.” Alas, though, that may not be good enough. Today, said Teller, the accelerating speed of scientific and technological innovations (and, I would add, new ideas, such as gay marriage) can outpace the capacity of the average human being and our societal structures to adapt and absorb them. With that thought in mind, Teller added one more thing to the graph—a big dot. He drew that dot on the rapidly sloping technology curve just above the place where it intersected with the adaptability line. He labeled it: “We are here.” The graph, as redrawn for this book, can be seen on the next page. That dot, Teller explained, illustrates an important fact: even though human beings and societies have steadily adapted to change, on average, the rate of technological change is now accelerating so fast that it has risen above the average rate at which most people can absorb all these changes. Many of us cannot keep pace anymore. “And that is causing us cultural angst,” said Teller. “It’s also preventing us from fully benefiting from all of the new technology that is coming along every day.”
If the technology platform for society can now turn over in five to seven years, but it takes ten to fifteen years to adapt to it, Teller explained, “we will all feel out of control, because we can’t adapt to the world as fast as it’s changing. By the time we get used to the change, that won’t even be the prevailing change anymore—we’ll be on to some new change.” That is dizzying for many people, because they hear about advances such as robotic surgery, gene editing, cloning, or artificial intelligence, but have no idea where these developments will take us. “None of us have the capacity to deeply comprehend more than one of these fields—the sum of human knowledge has far outstripped any single individual’s capacity to learn—and even the experts in these fields can’t predict what will happen in the next decade or century,” said Teller. “Without clear knowledge of the future potential or future unintended negative consequences of new technologies, it is nearly impossible to draft regulations that will promote important advances—while still protecting ourselves from every bad side effect.” In other words, if it is true that it now takes us ten to fifteen years to understand a new technology and then build out new laws and regulations to safeguard society, how do we regulate when the technology has come and gone in five to seven years? This is a problem. Another big challenge is the way we educate our population. We go to school for twelve or more years during our childhoods and early adulthoods, and then we’re done. But when the pace of change gets this fast, the only way to retain a lifelong working capacity is to engage in lifelong learning. There is a whole group of people—judging from the 2016 U.S. election—who “did not join the labor market at age twenty thinking they were going to have to do lifelong learning,” added Teller, and they are not happy about it. All of these are signs “that our societal structures are failing to keep pace with the rate of change,” he said. Everything feels like it’s in constant catch-up mode. What to do? We certainly don’t want to slow down technological progress or abandon regulation. The only adequate response, said Teller, “is that we try to increase our society’s ability to adapt.” That is the only way to release us from the society-wide anxiety around tech. “We can either push back against technological advances,” argued Teller, “or we can acknowledge that humanity has a new challenge: we must rewire our societal tools and institutions so that they will enable us to keep pace. The first option—trying to slow technology—may seem like the easiest solution to our discomfort with change, but humanity is facing some catastrophic environmental problems of its own making, and burying our heads in the sand won’t end well. Most of the solutions to the big problems in the world will come from scientific progress.” If we could “enhance our ability to adapt even slightly,” he continued, “it would make a significant difference.” Enhancing humanity’s adaptability, argued Teller, is 90 percent about “optimizing for learning”—applying features that drive technological innovation to our culture and social structures. Every institution, whether it is the patent office, which has improved a lot in recent years, or any other major government regulatory body, has to keep getting more agile—it has to be willing to experiment quickly and learn from mistakes. 
Rather than expecting new regulations to last for decades, it should continuously reevaluate the ways in which they serve society. Universities are now experimenting with turning over their curriculum much faster and more often to keep up with the change in the pace of change—putting a “use-by date” on certain courses. Government regulators need to take a similar approach.

And now those sensors are churning out insights at a level of granularity we have never had before. When all of these sensors transmit their data to centralized data banks, and then increasingly powerful software applications look for the patterns in that data, we can suddenly see weak signals before they become strong ones, and we can see patterns before they cause problems. Those insights can then be looped back for preventive action—when we empty the garbage bins at the optimal moment or adjust the pressure in a fire hydrant before a costly blowout, we are saving time, money, energy, and lives and generally making humanity more efficient than we ever imagined we could be. “The old approach was called ‘condition-based maintenance’—if it looks dirty, wash it,” explained Ruh. “Preventive maintenance was: change the oil every six thousand miles, whether you drive it hard or not.” The new approach is “predictive maintenance” and “prescriptive maintenance.” We can now predict nearly the exact moment when a tire, engine, car or truck battery, turbine fan, or widget needs to be changed, and we can prescribe the exact detergent that works best for that particular engine operating under different circumstances.
“It turns out that there is a simple secret of when the cow is in heat—the number of steps she takes picks up,” said Sirosh. “That is when AI [artificial intelligence] meets AI [artificial insemination].” Having this system at their fingertips made the farmers more productive not only in expanding their herds—“you get a huge improvement in conception rates,” said Sirosh—but also in saving time: it liberated them from having to rely on their own eyes, instincts, expensive farm labor, or the Farmers’ Almanac to identify cows in heat. They could use the labor savings for other productive endeavors.

Latanya Sweeney, the then chief technology officer for the Federal Trade Commission, explained on National Public Radio on June 16, 2014, how sensing and software are transforming retail: “What a lot of people may not realize is that, in order for your phone to make a connection on the Internet, it’s constantly sending out a unique number that’s embedded in that phone, called the MAC address, to say, ‘Hey, any Wi-Fis out there?’ … And by using these constant probe requests by the phone looking for Wi-Fis, you could actually track where that phone has been, how often that phone comes there, down to a few feet.” Retailers now use this information to see what displays you lingered over in their stores and which ones tempted you to make a purchase, leading them to adjust displays regularly during the day. But that’s not the half of it—big data now allows retailers to track who drove by which billboard and then shopped in one of their stores.

“Google described a way to easily harness lots of affordable computers,” said Cutting. “They did not give us the running source code, but they gave us enough information that a skilled person could reimplement it and maybe improve on it.” And that is precisely what Hadoop did. Its algorithms made hundreds of thousands of computers act like one giant computer. So anyone could just go out and buy commodity hardware in bulk and storage in bulk, run it all on Hadoop, and presto, do computation in bulk that produced really fine-grained insights. Soon enough, Facebook and Twitter and LinkedIn all started building on Hadoop. And that’s why they all emerged together in 2007! It made perfect sense. They had big amounts of data streaming through their business, but they knew that they were not making the best use of it. They couldn’t. They had the money to buy hard drives for storage, but not the tools to get the most out of those hard drives, explained Cutting. Yahoo and Google wanted to capture Web pages and analyze them so people could search them—a valuable goal—but search became even more effective when companies such as Yahoo or LinkedIn or Facebook could see and store every click made on a Web page, to understand exactly what users were doing. Clicks could already be recorded, but until Hadoop came along no one besides Google could do much with the data.

Imagine a place that is a cross between Wikipedia and Amazon—just for software: You go online to the GitHub library and pick out the software that you need right off the shelf—for, say, an inventory management system or a credit card processing system or a human resources management system or a video game engine or a drone-controlling system or a robotic management system.
You then download it onto your company’s computer or your own, you adapt it for your specific needs, you or your software engineers improve it in some respects, and then you upload your improvements back into GitHub’s digital library so the next person can use this new, improved version. Now imagine that the best programmers in the world from everywhere—either working for companies or just looking for a little recognition—are all doing the same thing. You end up with a virtuous cycle for the rapid learning and improving of software programs that drives innovation faster and faster.

The combination of that bubble and then its bursting—with the dot-com bust in the year 2000—dramatically brought down the price of voice and data connectivity and led, quite unexpectedly, to the wiring of the world to a greater degree than ever before. The price of bandwidth connectivity declined so much that suddenly a U.S. company could treat a company in Bangalore, India, as its back office, almost as if it were located in its back office. To put it another way, all of these breakthroughs around 2000 made connectivity fast, free, easy for you, and ubiquitous. Suddenly we could all touch people whom we could never touch before. And suddenly we could be touched by people who could never touch us before. I described that new sensation with these words: “The world is flat.” More people than ever could now compete, connect, and collaborate on more things for less money with greater ease and equality than ever before. The world as we knew it got reshaped.

I think what happened in 2007—with the emergence of the supernova—was yet another huge leap upward onto a new platform. Only this move was biased toward easing complexity. When all the advances in hardware and software melded into the supernova, it vastly expanded the speed and scope at which data could be digitized and stored, the speed at which it could be analyzed and turned into knowledge, and how far and fast it could be distributed from the supernova to anyone, anywhere with a computer or mobile device. The result was that suddenly complexity became fast, free, easy for you, and invisible.
**spoiler alert** Useful notes from the book:
Five important trends related to how far products are shipped, listed at the end of the "Peak Travel" chapter:
1&2. Higher wages in China make it cost-effective for the U.S. to start making things itself again (with the help of technology).
3. The emergence of 3-D printing could completely eliminate the need for around-the-world shipping of many products.
4. Crowdsourcing apps such as ridesharing could eliminate the need for car ownership without needing to completely rebuild infrastructure.
5. Driverless cars - he dedicates all of Ch12 to explaining the implications of a world full of driverless cars: virtually no accidents, no need for parking, highly efficient travel without traffic jams, no car ownership (compares the two models of the future: Google's model (computer only) vs. the car-makers' (supplementing human driving with AI help)).
Ch13 "The Next Door" is all about possible options for the future to address current transportation problems. Filled with several ideas about how to address some of the contradictions that currently exist within our transportation system - such as the desire to have products shipped right to our door, but hating to deal with delivery trucks on the road. Some ideas: -Creating more pedestrian friendly spaces to avoid forcing walkers/bikers in areas where they are openly competing with cars. e.g., The closing off of Times Square in NYC. -Make shorter-term transportation plans because technology and traveling behavior change so quickly (e.g., ride services and driverless cars) -Use tolls to regulate driving behavior - it would cost more to drive during peak hours (congestion pricing) -Use variable pricing for road use - trucks should cost more given the increased damage they do to the roads -Persuade businesses to time shift start and quit times for employees -Convert traffic lights to traffic circles - safer and faster because drivers don't have to stop completely (20% faster) -Mass transit should focus on existing infrastructure instead of building light rail systems from scratch (e.g., make buses a priority travel system - special faster lanes buses would be allowed to take - express transit lanes) -The last mile problem (getting from main roads to door-to-door) is a problem for public transportation, but ride-share services combined with shuttle or bus-share services might solve this problem in the future. -For goods shipping, use carpool lanes for exclusive truck use to reduce congestion -As far as global warming issues go, the inevitable solution is a switch to more renewable energy sources, but also reducing the number of miles traveled by our goods - currently the price of fossil fuels (on the environment especially) is subsidized by the government -Culturally, changing the lack of social norms for walking and biking could be hugely beneficial (statistics about American's walking: in 1980 5.6% commuted to work by foot, in 2012 2.8% - we walk about 2.5 miles a day, which is about half what a healthy person should cover, and is about half of what the average Australian or Swiss citizen does - Amish men (represent American 150 years ago) walk about 9 miles a day. Countries with higher daily steps have lower obesity rates.
Basic chapter summaries:
Ch1 - Morning Alarm - all the parts of his iPhone and how far they traveled
Ch2 - The Ghost in the Can - the incredible uses of aluminum, how it is in everything now, and how it gets transported from beginning to end (the most useful recycled product)
Ch3 - Morning Brew - a detailed look at how far his coffee travels to get to him
Ch4 - Four Airliners a Week - the danger of driving cars and how unfazed most people are by the level of risk (compared to if four airliners a week crashed)
Ch5 - Friday the 13th - following how many fatal crashes happen in a single day and the psychology behind why drivers make mistakes
Ch6 - Pizza, Ports and Valentines - an introduction to the port system and how much coordination must take place to allow a place like Domino's to deliver a pizza on time
Ch7 - The Ladies of Logistics - the development of the busiest ports - LA and Long Beach - and how they handled the logistics of shipping and the development of the infrastructure needed for it
Ch8 - Angels Gate - a detailed description of the cargo traffic controllers who regulate the order in which ships come into port and how backed up they get when problems occur
Ch9 - The Ballet in Motion - details of how container ships work, the pilots that bring them into harbor, etc.
Ch10 - The Last Mile - a description of UPS service and how it gets packages the last mile of shipping to the customer's door - includes an assessment of the obstacles building up for future shipping needs
Ch11 - Peak Travel - a traffic engineer's assessment of the future of transportation and how dire infrastructure problems are in the U.S. if things continue in the current direction
Ch12 - Robots in Paradise - the current status of driverless cars and what the future could look like
Ch13 - The Next Door - ideas for solving transportation now and in the future
**spoiler alert** This was a great thought-provoking book. He comes at the "why" from a completely different angle than I would, but I greatly valued the thesis that we constantly think we know more than we actually know, and we should favor systems that handle uncertainty well.
My favorite notes and quotes from the book:
This section, Book II, deals with the fragility that comes from the denial of hormesis, the natural antifragility of organisms, and how we hurt systems with the very best of intentions by playing conductor. We are fragilizing social and economic systems by denying them stressors and randomness, putting them in the Procrustean bed of cushy and comfortable—but ultimately harmful—modernity.
Some people have fallen for the naive turkey-style belief that the world is getting safer and safer, and of course they naively attribute it to the holy “state” (though bottom-up Switzerland has about the lowest rate of violence of any place on the planet). It is exactly like saying that nuclear bombs are safer because they explode less often. The world is subjected to fewer and fewer acts of violence, while wars have the potential to be more criminal. We were very close to the mother of all catastrophes in the 1960s when the United States was about to pull the nuclear trigger on the Soviet Union. Very close. When we look at risks in Extremistan, we don’t look at evidence (evidence comes too late), we look at potential damage: never has the world been more prone to more damage; never. It is hard to explain to naive data-driven people that risk is in the future, not in the past.
The point of the previous chapter was that the risk properties of the first brother (the fragile bank employee) are vastly different from those of the second one (the comparatively antifragile artisan taxi driver). Likewise, the risk characteristic of a centralized system is different from that of a messy municipally-led confederation. The second type is stable in the long run because of some dose of volatility. Variations also act as purges. Small forest fires periodically cleanse the system of the most flammable material, so this does not have the opportunity to accumulate. Systematically preventing forest fires from taking place “to be safe” makes the big one much worse. For similar reasons, stability is not good for the economy: firms become very weak during long periods of steady prosperity devoid of setbacks, and hidden vulnerabilities accumulate silently under the surface—so delaying crises is not a very good idea. Likewise, absence of fluctuations in the market causes hidden risks to accumulate with impunity. The longer one goes without a market trauma, the worse the damage when commotion occurs.

The idea of injecting random noise into a system to improve its functioning has been applied across fields. By a mechanism called stochastic resonance, adding random noise to the background makes you hear the sounds (say, music) with more accuracy.

To summarize, the problem with artificially suppressed volatility is not just that the system tends to become extremely fragile; it is that, at the same time, it exhibits no visible risks. Also remember that volatility is information. In fact, these systems tend to be too calm and exhibit minimal variability as silent risks accumulate beneath the surface. Although the stated intention of political leaders and economic policy makers is to stabilize the system by inhibiting fluctuations, the result tends to be the opposite. These artificially constrained systems become prone to Black Swans.

Modernity is not just the postmedieval, postagrarian, and postfeudal historical period as defined in sociology textbooks. It is rather the spirit of an age marked by rationalization (naive rationalism), the idea that society is understandable, hence must be designed, by humans. With it was born statistical theory, hence the beastly bell curve. So was linear science. So was the notion of “efficiency”—or optimization. Indeed, in the past, when we were not fully aware of antifragility and self-organization and spontaneous healing, we managed to respect these properties by constructing beliefs that served the purpose of managing and surviving uncertainty. We imparted improvements to agency of god(s). We may have denied that things can take care of themselves without some agency. But it was the gods that were the agents, not Harvard-educated captains of the ship. So the emergence of the nation-state falls squarely into this progression—the transfer of agency to mere humans. The story of the nation-state is that of the concentration and magnification of human errors. Modernity starts with the state monopoly on violence, and ends with the state’s monopoly on fiscal irresponsibility.
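The stochastic resonance mention a couple of paragraphs above is easy to demonstrate numerically. Below is a minimal sketch of my own (not from the book), assuming a weak sub-threshold sine "signal" and a detector that only registers values above a fixed threshold: with too little noise the detector never fires, with too much it fires at random, and at an intermediate noise level its output tracks the signal best. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0, 10, 5000)
signal = 0.8 * np.sin(2 * np.pi * 1.0 * t)   # weak signal, always below the threshold
threshold = 1.0                              # detector only registers values above this

def detection_quality(noise_std, trials=20):
    """Average correlation between the true signal and the detector output."""
    corrs = []
    for _ in range(trials):
        noisy = signal + rng.normal(0.0, noise_std, size=t.size)
        fired = (noisy > threshold).astype(float)
        if fired.std() == 0.0:               # detector never fires: no information
            corrs.append(0.0)
        else:
            corrs.append(np.corrcoef(signal, fired)[0, 1])
    return float(np.mean(corrs))

for sigma in (0.01, 0.1, 0.3, 0.6, 1.0, 2.0, 5.0):
    print(f"noise std {sigma:>4}: correlation with the signal = {detection_quality(sigma):.3f}")
```

Running this, the correlation is near zero at very low and very high noise and peaks somewhere in between, which is the sense in which a dose of randomness can help a system function.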
We will discuss next two central elements at the core of modernity. Primo, in Chapter 7, naive interventionism, with the costs associated with fixing things that one should leave alone. Secundo, in Chapter 8 and as a transition to Book III, this idea of replacing God and the gods running future events with something even more religiously fundamentalist: the unconditional belief in the idea of scientific prediction regardless of the domain, the aim to squeeze the future into numerical reductions whether reliable or unreliable. For we have managed to transfer religious belief into gullibility for whatever can masquerade as science. For a theory is a very dangerous thing to have. And of course one can rigorously do science without it. What scientists call phenomenology is the observation of an empirical regularity without a visible theory for it. In the Triad, I put theories in the fragile category, phenomenology in the robust one. Theories are superfragile; they come and go, then come and go, then come and go again; phenomenologies stay, and I can’t believe people don’t realize that phenomenology is “robust” and usable, and theories, while overhyped, are unreliable for decision making—outside physics.
An ethical problem arises when someone is put in charge. Greenspan’s actions were harmful, but even if he knew that, it would have taken a bit of heroic courage to justify inaction in a democracy where the incentive is to always promise a better outcome than the other guy, regardless of the actual, delayed cost.
Here, all I am saying is that we need to avoid being blind to the natural antifragility of systems, their ability to take care of themselves, and fight our tendency to harm and fragilize them by not giving them a chance to do so. What should we control? As a rule, intervening to limit size (of companies, airports, or sources of pollution), concentration, and speed are beneficial in reducing Black Swan risks. Let me simplify my take on intervention. To me it is mostly about having a systematic protocol to determine when to intervene and when to leave systems alone. Few understand that procrastination is our natural defense, letting things take care of themselves and exercise their antifragility; it results from some ecological or naturalistic wisdom, and is not always bad—at an existential level, it is my body rebelling against its entrapment. Psychologists and economists who study “irrationality” do not realize that humans may have an instinct to procrastinate only when no life is in danger. I do not procrastinate when I see a lion entering my bedroom or fire in my neighbor’s library. I do not procrastinate after a severe injury. I do so with unnatural duties and procedures. Since procrastination is a message from our natural willpower via low motivation, the cure is changing the environment, or one’s profession, by selecting one in which one does not have to fight one’s impulses. Few can grasp the logical consequence that, instead, one should lead a life in which procrastination is good, as a naturalistic-risk-based form of decision making.
noise and signal. Noise is what you are supposed to ignore, signal what you need to heed. Indeed, we have loosely mentioned “noise” earlier in the book; time to be precise about it. In science, noise is a generalization beyond the actual sound to describe random information that is totally useless for any purpose, and that you need to clean up to make sense of what you are listening to. Consider, for example, elements in an encrypted message that have absolutely no meaning, just randomized letters to confuse the spies, or the hiss you hear on a telephone line that you try to ignore in order to focus on the voice of your interlocutor. And this personal or intellectual inability to distinguish noise from signal is behind overintervention. The previous two chapters showed how you can use and take advantage of noise and randomness; but noise and randomness can also use and take advantage of you, particularly when totally unnatural, as with the data you get on the Web or through the media. The more frequently you look at data, the more noise you are disproportionally likely to get (rather than the valuable part, called the signal); hence the higher the noise-to-signal ratio. And there is a confusion which is not psychological at all, but inherent in the data itself. Say you look at information on a yearly basis, for stock prices, or the fertilizer sales of your father-in-law’s factory, or inflation numbers in Vladivostok. Assume further that for what you are observing, at a yearly frequency, the ratio of signal to noise is about one to one (half noise, half signal)—this means that about half the changes are real improvements or degradations, the other half come from randomness. This ratio is what you get from yearly observations. But if you look at the very same data on a daily basis, the composition would change to 95 percent noise, 5 percent signal. And if you observe data on an hourly basis, as people immersed in the news and market price variations do, the split becomes 99.5 percent noise to 0.5 percent signal. That is two hundred times more noise than signal—which is why anyone who listens to news (except when very, very significant events take place) is one step below sucker.
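The 50/95/99.5 percent figures follow from a simple scaling argument, which the sketch below reproduces under one explicit assumption of mine (not the book's): the signal is a steady drift that grows linearly with the observation window, while the noise is random variation that grows only with the square root of the window, and the two are calibrated to be equal over a year. Sampling more often then mechanically inflates the noise share; the numbers come out in the same ballpark as the quote, though not identical.

```python
import math

# Assumed model: signal accumulates linearly with the window, noise grows
# like the square root of the window (as for a random walk). Both are
# calibrated so that over one year they are equal (the quote's 50/50 split).
drift_per_year = 1.0        # signal accumulated over one year (assumed)
volatility_per_year = 1.0   # standard deviation of the noise over one year (assumed)

windows_in_years = {
    "yearly": 1.0,
    "monthly": 1.0 / 12,
    "daily": 1.0 / 365,
    "hourly": 1.0 / (365 * 24),
}

for name, w in windows_in_years.items():
    signal = drift_per_year * w
    noise = volatility_per_year * math.sqrt(w)
    print(f"{name:>8}: noise is about {noise / signal:>5.0f}x the signal "
          f"({noise / (noise + signal):.1%} of what you observe is noise)")
```

Under these assumptions the daily split is about 95 percent noise and the hourly split about 99 percent, which is the mechanism behind the quoted claim that checking data more often mostly delivers more noise.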
To conclude, the best way to mitigate interventionism is to ration the supply of information, as naturalistically as possible. This is hard to accept in the age of the Internet. It has been very hard for me to explain that the more data you get, the less you know what’s going on, and the more iatrogenics you will cause. People are still under the illusion that “science” means more data.
The idea of proposing the Triad was born there and then as an answer to my frustration: Fragility-Robustness-Antifragility as a replacement for predictive methods.
Let me stop to issue rules based on the chapter so far. (i) Look for optionality; in fact, rank things according to optionality, (ii) preferably with open-ended, not closed-ended, payoffs; (iii) Do not invest in business plans but in people, so look for someone capable of changing six or seven times over his career, or more (an idea that is part of the modus operandi of the venture capitalist Marc Andreessen); one gets immunity from the backfit narratives of the business plan by investing in people. It is simply more robust to do so; (iv) Make sure you are barbelled, whatever that means in your business. (Notes: optionality is about leaving yourself choices; being barbelled means keeping most of your risk small and bounded, which allows you to take a big, high-risk bet at the same time (like having a government job to support your writing career).)
On p. 244 he talks about being "an intelligent anti-student" or "autodidact", as opposed to the "swallowers" whose learning was defined by a curriculum. "Again, I wasn't exactly an autodidact, since I did get degrees; I was rather a barbell autodidact as I studied the exact minimum necessary to pass any exam, overshooting accidentally once in a while, and only getting in trouble a few times by undershooting. But I read voraciously, wholesale, initially in the humanities, later in mathematics and science, and now in history - outside a curriculum, away from the gym machine so to speak. I figured out that whatever I selected myself I could read with more depth and more breadth - there was a match to my curiosity. And I could take advantage of what people later pathologized as Attention Deficit Hyperactive Disorder (ADHD) by using natural stimulation as a main driver to scholarship. The enterprise needed to be totally effortless in order to be worthwhile. The minute I was bored with a book or a subject I moved to another one, instead of giving up on reading altogether - when you are limited to the school material and you get bored, you have a tendency to give up and do nothing or play hooky out of discouragement. The trick is to be bored with a specific book, rather than with the act of reading. So the number of pages absorbed could grow faster than otherwise. And you find gold, so to speak, effortlessly, just as in rational but undirected trial-and-error-based research. It is exactly like options, trial and error, not getting stuck, bifurcating when necessary but keeping a sense of broad freedom and opportunism. Trial and error is freedom."
On pp. 297-298 he gives several examples of convex (antifragile) and concave (fragile) functions. These are non-linear functions that describe many systems in the real world. "Someone with a linear payoff needs to be right more than 50 percent of the time. Someone with a convex payoff, much less. The hidden benefit of antifragility is that you can guess worse than random and still end up outperforming. Here lies the power of optionality - your function of something is very convex, so you can be wrong and still do fine - the more uncertainty, the better." - "The hidden harm of fragility is that you need to be much, much better than random in your prediction and knowing where you are going, just to offset the negative effect." - So in other words, we constantly overestimate our ability to predict complex systems and thus create lots of fragile systems that will inevitably break.
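The claim that a convex payoff lets you "guess worse than random and still end up outperforming" can be illustrated with a quick simulation. This is a sketch under assumptions of my own choosing (fat-tailed moves, a 45%-accurate direction guess, and a deliberately cheap option-like bet), not a market model from the book; the point is only that capping the downside while keeping the upside open changes how often you need to be right.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions (mine, not the book's): moves are fat-tailed
# (Student-t, 3 degrees of freedom, typical size ~1), the forecaster calls
# the direction correctly only 45% of the time, and the convex bet costs a
# fixed premium of 0.1 per period, chosen small so the asymmetry is visible.
n_periods = 100_000
accuracy = 0.45          # worse than a coin flip
premium = 0.1            # cost of the capped-downside (convex) bet

moves = rng.standard_t(df=3, size=n_periods)       # what actually happens each period
correct = rng.random(n_periods) < accuracy         # was the direction call right?
guess = np.where(correct, np.sign(moves), -np.sign(moves))

# Linear payoff: gain or lose the full move depending on the call.
linear_pnl = guess * moves

# Convex payoff: capture the move only when the call is right; otherwise
# lose just the premium. Losses are capped, gains are not.
convex_pnl = np.maximum(guess * moves, 0.0) - premium

print(f"linear payoff, average per period: {linear_pnl.mean():+.3f}")
print(f"convex payoff, average per period: {convex_pnl.mean():+.3f}")
```

With these assumptions the linear bettor loses on average (being right less than half the time is fatal when wins and losses are symmetric), while the convex bettor comes out ahead because the occasional large correct calls dwarf the many small, capped losses.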
On p. 303 he talks about the importance of inaction in many situations: "I have used all my life a wonderfully simple heuristic: charlatans are recognizable in that they will give you positive advice, and only positive advice, exploiting our gullibility and sucker-proneness for recipes that hit you in a flash as just obvious, then evaporate later as you forget them." ... "Yet in practice it is the negative that's used by the pros, those selected by evolution: chess grandmasters usually win by not losing; people become rich by not going bust..."
On p. 304 he continues by connecting to Karl Popper's explanation of science: "Let us say that, in general, failure (and disconfirmation) are more informative than success and confirmation, which is why I claim that negative knowledge is just 'more robust.'"
On p. 308 he cites Daniel Simons's gorilla study and how important it is to focus on "less is more" or only the most important: "I discovered that I had been intuitively using the less is more idea as an aid in decision making (contrary to the method of putting a series of pros and cons side by side on a computer screen). For instance, if you have more than one reason to do something (choose a doctor or veterinarian, hire a gardener or an employee, marry a person, go on a trip), just don't do it. It does not mean that one reason is better than two, just that by invoking more than one reason you are trying to convince yourself to do something. Obvious decisions (robust to error) require no more than a single reason."
On p. 348 he discusses the dangers of trying to mess with overly complex systems that we have very little understanding of: "Our record of understanding risks in complex systems (biology, economics, climate) has been pitiful, marred with retrospective distortions (we only understand the risks after the damage takes place, yet we keep making the mistake), and there is nothing to convince me that we have gotten better at risk management. In this particular case, because of the scalability of the errors, you are exposed to the wildest possible form of randomness. Simply, humans should not be given explosive toys (like atomic bombs, financial derivatives, or tools to create life)."
**spoiler alert** -Basic thesis of the book is that it is much more difficult to go from 0 to 1 than from 1 to 2. Starting with a truly new idea is much more impactful than an incremental improvement of an old idea.
Chapter 1: The difference between horizontal progress/globalization and vertical progress/technological innovation (doing new things). The latter is needed to solve problems challenging our future.
Chapter 2: Why the '90s tech bubble happened, and which companies survived it and why.
Chapter 3: Innovation and unique technology that the market demands give you profitable monopoly power. Economists usually talk about how some degree of monopoly power (e.g., patents) encourages innovation, but Thiel was looking at it from another perspective: that of an entrepreneur choosing which type of business to start. His answer is to choose one that makes differentiated products (that the market demands) and gives you monopoly power.
Chapter 4: Again, competition vs. monopoly. Traditional businesses provide similar products and compete by cutting prices/costs, advertising etc.; innovative businesses make products that no one else makes. He argues that going into a business that isn't a monopoly (a competitive market) is a recipe for disaster (and bad capitalism).
Chapter 5: What are the features of a business that can create and sustain monopoly power, growth, and cash flow? 1. Unique technology (at least 10 times better than the existing alternative--otherwise it won't be noticed--or entirely new products). 2. Network effects - the product becomes more valuable as more people use it, so it only works at scale (even though you have to start small). 3. Economies of scale - growth not constrained by physical overhead, personnel, etc. 4. Brand. How to create one? Start by monopolizing a small market (one that needs to actually exist, unlike the market for British food in Palo Alto) by winning the most important group of users in that market, then expand in size or variety (i.e., into related markets). Don't focus on DISRUPTION; make the pie bigger instead of playing a zero-sum game.
Chapter 6: Have a purpose, a vision, and long-run planning, instead of a lean startup with a minimum viable product driven by whatever comes up along the way. This represents the idea of "definite optimism" - have specific plans and think big, rather than having vague plans that won't take you anywhere in particular.
Chapter 7: Power law. As a venture capitalist, don't simply pursue diversification; choose a few businesses to invest in and choose them carefully so all of them have great (expected) potential.
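The power-law point is easy to see in a toy simulation. This is a sketch under my own assumptions (Pareto-distributed return multiples with an arbitrary shape parameter and a 20-company portfolio), not anything from the book; it just shows how heavy-tailed returns make one investment dominate, which is why careful selection matters more than blind diversification.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative assumption (mine, not Thiel's): each investment's return
# multiple follows a heavy-tailed Pareto distribution, so a handful of
# outcomes dwarf everything else -- the "power law" the chapter describes.
n_funds = 10_000         # simulated portfolios
n_investments = 20       # companies per portfolio
multiples = rng.pareto(a=1.2, size=(n_funds, n_investments)) + 1.0

best = multiples.max(axis=1)            # best single investment in each portfolio
rest = multiples.sum(axis=1) - best     # all the other investments combined

share = np.mean(best > rest)
print(f"In {share:.0%} of simulated {n_investments}-company portfolios, "
      f"the single best investment returns more than all the others combined.")
```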
Chapter 8: All great businesses have "secrets". A world without secrets would be boring and stagnant, with no room for improvement. Our world is full of injustice and inefficiency, so it cannot be one without secrets. What prevents us from exploring these secrets is gradualism, risk aversion, inertia, and belief in equilibrium ("flatness," or perfect efficiency of markets). Only those who see secrets can grasp hidden opportunities, lead scientific revolutions, and found businesses like Airbnb, Uber, Lyft, etc. How to find secrets? Look where no one else does; choose the path less traveled. That's why it's (kinda) important to have contrarian views based on your own independent thinking.
Chapter 9: According to the power law, a few things are crucial to the entire business, e.g., its foundations. Make sure you choose founding partners who are really passionate about the business and whom you enjoy working with. (Stuff on the size of the board, stock as incentives, etc.) You are not only trying to create new things at the founding stage of a startup; if you are successful, you will have created a business that stays creative.
Chapter 10: The company doesn't attract employees (or create a "culture") by providing benefits like free food, free laundry, etc. It should attract employees by what it does and who is on the team. A company should be its own culture. It should be like a cult (rather than a consulting firm with no loyalty or identity), but one that is not extreme. (In Chapter 8, the HP example shows that once a company is managed in the conservative "MBA" way to optimize for bureaucratic functioning, innovation dies.)
Chapter 11: Marketing is important. Marketing influences everyone, especially those who think they are not influenced. The best marketer doesn't look like one. Different ways of marketing, from viral marketing to complex sales. Make sure you have a viable marketing strategy for your product. It's great if your customers can market for you, e.g. through network externality (like PayPal).
Chapter 12: Humans and computers should be complements (e.g. PayPal fraud detection, Palantir helping intelligence experts) rather than substitutes. (Great idea, and makes sense in some way--there are things that one is good at while the other is not--but in practice they are also substitutes in many ways which have unfortunate implications for employment.)
Chapter 13: The failure of most clean technology firms was a business failure rather than a political one. How Tesla succeeded while many others failed, judged on 7 dimensions: engineering (breakthrough technology), timing, monopoly (start with a big share of a small market), personnel, marketing (have a way to deliver your product), durability (will your market position be defensible in 10/20 years?), secrets (have you identified a unique opportunity that others don't see?). He also covers the fallacy of social entrepreneurship.
Chapter 14: Many (tech) entrepreneurs are "weird". Innovative tech companies are usually authoritarian, with charismatic leaders like Steve Jobs. Society should be more tolerant of seemingly weird or extreme entrepreneurs, because we need extraordinary people to lead companies in order to avoid the slow progress of gradualism. However, thankfully, Thiel also advises such leaders to remain cautious, not to overestimate their power or become Ayn Rand-style "prime movers" who do not realize that their success relies on other people.