DECEMBER 31, 2024
Imagine you’re tracking the temperature of your city every day for a year. You have a long list of numbers representing daily temperatures—this is your time series. Naturally, you might wonder: what will the temperature be tomorrow or next week? Most people would instinctively think of using a forecasting method to predict future temperatures based on past trends. This curiosity and need to predict the future form the essence of modeling.
The advent of AI and powerful computational methods has revolutionized how we approach such questions. Today, modeling is often synonymous with leveraging numerical techniques to understand and predict complex phenomena. Yet, to truly harness its power, we must understand its foundational principles, which can be conceptualized as a three-step framework: approximating reality, formalizing it through mathematics, and leveraging computational methods.
At its core, a model is a simplified representation of reality, constructed through observation, experimentation, and logical reasoning. Throughout history, humans have built intuitive models based on patterns they observed. For instance, ancient people noticed that pressing against an object made it move, and pressing harder made it move faster. This intuitive understanding laid the groundwork for the formalized model of motion.
The essence of modeling at this stage is to capture the essential features of a phenomenon without overwhelming complexity. A model’s value lies in its ability to approximate reality sufficiently to explain, predict, or manipulate it.
Mathematics serves as the language that translates these conceptual models into precise and unambiguous terms. Figures like Isaac Newton epitomized this transition by formalizing intuitive models into mathematical laws. For instance, Newton’s Laws of Motion distilled centuries of observations into equations describing the relationship between force, motion, and acceleration.
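In modern notation, the second of those laws is strikingly compact: F = ma, the net force on a body equals its mass times its acceleration. One short equation captures what generations of observers had only sensed intuitively, and it can be rearranged, combined, and tested in ways an informal intuition never could.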
Mathematics allows models to be generalized, rigorously tested, and applied across diverse contexts. It transforms intuitive ideas into frameworks that can be communicated and analyzed, ensuring clarity and repeatability.
Computational methods act as tools for solving complex mathematical problems that are infeasible with pen and paper. For instance, designing a bridge or an airplane requires engineers to ensure the structure can handle forces like wind, weight, and vibrations. The underlying math, based on Newton’s laws, is often too intricate for manual calculation. Computers enable approximations of these equations, allowing engineers to simulate real-world scenarios and optimize designs.
While computational methods introduce approximations, they provide solutions precise enough to make critical decisions, from ensuring structural safety to optimizing performance.
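To give a flavor of what approximating these equations looks like, here is a toy sketch in Python. It is only an illustration under invented assumptions (the mass, drag coefficient, and time step are made up); real engineering simulations use far more sophisticated numerical methods, but the core idea of stepping through time in small increments is the same.

```python
# Toy illustration: Newton's second law (F = m * a) for a falling object.
# Once air resistance is included, there is no simple closed-form answer,
# but a computer can march forward in small time steps (the Euler method)
# and approximate the motion numerically.
m = 80.0    # mass in kg (invented for illustration)
g = 9.81    # gravitational acceleration in m/s^2
c = 0.25    # drag coefficient in kg/m (invented for illustration)
dt = 0.01   # time step in seconds; smaller steps give a better approximation

v, t = 0.0, 0.0
while t < 10.0:
    drag = c * v * v          # drag force grows with the square of the speed
    a = g - drag / m          # net acceleration, from F = m * a
    v += a * dt               # advance the velocity by one small step
    t += dt

print(f"Speed after {t:.0f} seconds of falling: {v:.1f} m/s")
```

The answer is approximate, but by shrinking the time step we can make it as precise as the decision at hand requires, which is, in spirit, the trade-off engineers manage when they simulate a bridge or a wing.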
Building on the three-step process of modeling—approximating reality, formalizing through mathematics, and leveraging computational methods—it becomes evident how critical each step is for creating reliable models. When one of these steps is skipped or inadequately addressed, the resulting model can lead to flawed predictions or catastrophic consequences. Let’s explore an example that illustrates what happens when one or more steps in the modeling process are neglected.
The 2008 Financial Crisis
The 2008 financial crisis exemplifies the dangers of flawed models. Financial institutions relied heavily on mathematical models to assess the risk of derivatives like mortgage-backed securities (MBS) and collateralized debt obligations (CDOs). These models assumed housing prices would always rise and that mortgage defaults were isolated events. However, they ignored real-world factors, such as risky lending practices and systemic risks from correlated defaults.
When housing prices fell, the underlying assumptions collapsed, triggering a global financial meltdown. In the aftermath, the “Financial Modelers' Manifesto” by Emanuel Derman and Paul Wilmott called for greater responsibility in financial modeling, emphasizing that models are simplifications of reality and must be applied cautiously.
This failure highlights a breakdown at two levels of the modeling process. At the foundational level, the models failed to capture the complexity of the real estate market and the systemic interconnections within financial systems. Mathematically, the assumptions and simplifications used to design these models did not accurately represent the real-world dynamics. Computationally, while simulations and risk analyses were conducted, they were only as good as the flawed assumptions driving them.
In this example, the pitfalls of faulty models can be traced back to gaps or missteps in the three-step modeling process: a lack of accurate observation and conceptualization, combined with errors or oversimplifications in mathematical formalization, can lead to significant real-world consequences.
By revisiting the foundational principles of modeling—approximating reality, formalizing through mathematics, and leveraging computational tools—we can better understand the strengths and weaknesses of models. These principles remind us that models are tools to aid decision-making, not infallible predictors, and that their reliability depends on critical evaluation and continuous refinement.
Modeling as a discipline has evolved alongside advances in tools and knowledge, transitioning through three key phases:
1. Empirical Observations: Foundations of Modeling
In ancient times, thinkers like Pythagoras and Aryabhatta relied on empirical observations to develop foundational models. Pythagoras, for example, observed geometric relationships, leading to the Pythagorean theorem. Aryabhatta used basic tools to model celestial movements. These early efforts, though imprecise by modern standards, laid the groundwork for future advancements.
2. Mathematical Methods: Precision and Formalization
The development of advanced mathematical techniques marked a significant leap. Figures like Newton and Euler formalized observations into predictive mathematical laws. Newton’s Laws of Motion and Euler’s equations exemplify this era, showcasing how mathematics deepened our understanding and predictive power.
3. Computational Methods: Modern Problem-Solving Tools
The rise of computers in the 20th century revolutionized modeling. Complex problems, such as simulating airflow over an airplane wing or predicting climate change, became tractable through computational methods. These tools enable scientists and engineers to approximate solutions for equations too intricate to solve manually, driving innovation across fields.
Scientific methodology evolves with the tools and understanding of the era. The progression from empirical observations to mathematical formalization and computational methods doesn’t invalidate earlier approaches. Instead, it builds upon them, creating a layered methodology:
Observations remain the foundation.
Mathematics provides structure.
Computational methods enable practical applications.
Each step enhances the modeling process, reflecting a continuum of discovery and innovation.
Modeling is more than just applying computational power; it is a scientific methodology rooted in observation, abstraction, and computation. However, models can only be as reliable as their assumptions. Examples like the 2008 financial crisis highlight the risks of over-reliance on incomplete or overly simplistic frameworks.
Ultimately, modeling is about bridging our understanding of complex phenomena with tools that evolve over time. It’s not about replacing earlier methods but enhancing them, ensuring that each layer builds upon the insights of the past. While modern AI models and computational methods expand our capabilities, the foundational principles of modeling remain grounded in the observation of reality and its mathematical expression.
Understanding this three-step process underscores the interplay between observation, abstraction, and computation. Each step builds upon the tools and insights of the previous one, ensuring that modeling continues to serve humanity responsibly and effectively.
Let’s revisit the temperature example introduced at the beginning. Tracking daily temperatures over a year gives us a time series—a sequence of observations indexed by time. Applying the three-step modeling process can help us analyze and forecast future temperatures effectively:
Approximation of Reality: Begin by observing and understanding the underlying patterns in the data. For instance, temperatures often follow seasonal trends, with higher averages in summer and lower in winter. This step involves identifying these patterns and any anomalies, such as unusually high or low temperatures, which could point to broader phenomena like heatwaves or cold snaps.
Mathematical Representation: Using the observed patterns, formulate a mathematical model to capture the relationships. For example, you might represent the temperature as a combination of a seasonal component (e.g., sine waves to model yearly cycles) and a noise component (random variations). Techniques such as Fourier analysis or decomposition methods can formalize these patterns into mathematical expressions.
Computational Methods: Finally, use computational tools to analyze the data and make predictions. For instance, statistical models like ARIMA (AutoRegressive Integrated Moving Average) or machine learning models like neural networks can be applied to predict future temperatures based on past data. Simulations can also test how the model performs under different scenarios, such as unusual weather conditions. A minimal Python sketch of the second and third steps is given below.
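This sketch is only an illustration under simplifying assumptions: the observations are simulated rather than measured, the seasonal cycle is modeled with a single sine and cosine pair, and every number (a 25°C mean, a 10°C amplitude, the noise level) is invented for the example.

```python
import numpy as np

# Step 1: approximation of reality -- a year of (simulated) daily temperatures
# with a seasonal cycle plus random day-to-day noise.
rng = np.random.default_rng(0)
days = np.arange(365)
temps = 25 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, size=365)

# Step 2: mathematical representation -- T(t) ~ mean + A*sin(2*pi*t/365)
# + B*cos(2*pi*t/365) + noise. The model is linear in (mean, A, B), so
# ordinary least squares can estimate the coefficients.
def seasonal_features(t):
    return np.column_stack([
        np.ones_like(t, dtype=float),
        np.sin(2 * np.pi * t / 365),
        np.cos(2 * np.pi * t / 365),
    ])

coef, *_ = np.linalg.lstsq(seasonal_features(days), temps, rcond=None)

# Step 3: computational methods -- extrapolate the fitted seasonal curve one
# week ahead. In practice a model such as ARIMA or a neural network would also
# capture the structure left over in the residual noise.
future_days = np.arange(365, 372)
forecast = seasonal_features(future_days) @ coef
print("Forecast for the next 7 days:", np.round(forecast, 1))
```

Fitting the same kind of model to real measurements, and handing the leftover residuals to a tool like ARIMA, is the usual path from this sketch to a genuinely useful forecast.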
By following this process, the temperature example transitions from raw data to a robust predictive framework. The observations guide the conceptual model, mathematics refines it into a structured form, and computational methods enable precise, actionable insights.
This application demonstrates how the three-step process bridges simple observations with sophisticated tools to solve real-world problems, reaffirming the universal value of modeling principles.
This blog is more technically loaded than usual; given the power of computational methods and computers, I felt that the underlying scientific thinking is somewhere getting lost. Thanks to a friend of mine for reading the article and suggesting that I add the last section, where the three principles are tied back to the temperature example I gave at the start.
OCTOBER 25, 2024
Technology is evolving at a breakneck pace today, impacting nearly every aspect of our lives. But how should we think about this constant state of change? One challenge when thinking about present events is that they can seem extraordinarily unique, as though nothing like them has ever occurred before. As the metaphor goes, "A small pebble kept close to the eye can seem larger than the sun." The present often feels magnified, eclipsing the broader history of technological advancement. However, when we step back and take a long view, we find that rapid technological progress is not unique to our time. It is part of a continuous process that has been shaping humanity for millennia.
I have always believed that studying history offers valuable insights into how we should think about the present and future. Sir Isaac Newton beautifully encapsulated this idea when he wrote to Robert Hooke, "If I have seen further, it is by standing on the shoulders of giants." Newton’s words remind us that progress is cumulative, building on the innovations and discoveries of those who came before us. History teaches us not just what technologies were developed, but also how societies adapted to and were transformed by them.
Another critical benefit of studying both the past and present of technology is that it gives us a clearer idea of how to contribute to shaping the future. By identifying emerging trends and understanding the transformative potential of new technologies, we can position ourselves to be active participants in the ongoing technological revolution.
Let’s take a step back and look at the various epochs of technology that have shaped human civilization:
Each of these epochs introduced groundbreaking technologies that fundamentally transformed human life. What’s fascinating is that many of the technologies that we now take for granted—such as electricity, the internet, or even basic tools like iron and steel—were once seen as luxurious innovations. As Andrew Carnegie once said, "Capitalism is about turning luxuries into necessities." The technologies that now form the foundation of modern life didn’t become indispensable overnight. It took the vision of entrepreneurs who saw the transformative power of these innovations and worked tirelessly to bring them into everyday use. Let’s highlight the companies in the modern age that played a pivotal role in making these technologies widely accessible:
These companies were built around core technologies that had applications across a variety of fields, enabling them to become indispensable for widespread innovation and accessibility. By establishing central points of production and distribution, these businesses not only expanded the reach of transformative technologies but also pioneered entirely new business models that democratized access. This allowed entrepreneurs and companies of all sizes to leverage powerful technologies without needing to build or maintain them from scratch.
For instance, centralized electricity generation and distribution, spearheaded by General Electric, made reliable power accessible to factories, businesses, and homes. In the digital age, IBM and Microsoft were foundational in making computers and software commercially viable, setting the stage for personal computing. More recently, Amazon Web Services (AWS) pioneered the model of businesses accessing computational power and storage on demand, freeing them from the need to manage their own servers. Taiwan Semiconductor Manufacturing Company (TSMC) played a similar role for microchips, centralizing semiconductor manufacturing and making advanced chips more accessible for everything from smartphones to high-performance computing. YouTube revolutionized media distribution by providing a platform for user-generated content, while the Apple App Store centralized software distribution for mobile devices, enabling millions of developers to reach users effortlessly.
This pattern of centralization and distribution can be explained by the basic economic forces of supply and demand. As demand for a technology increases, it becomes more cost-effective for companies to centralize production and distribution, allowing economies of scale to drive down costs and make the technology more accessible.
Another way technology evolves is through optimization—making existing tools more efficient, powerful, and accessible. This often leads to unexpected breakthroughs. In the early days of computing, Thomas J. Watson, then president of IBM, famously predicted, "I think there is a world market for maybe five computers." This prediction was based on the limited computing power and utility of early machines. Today, the computing power in a smartphone vastly surpasses that of the most powerful computers from the 1990s. For example, an iPhone 12 has approximately 100,000 times the processing power of the NASA computer that landed Apollo 11 on the moon. Another example of optimization can be seen in the progression of energy efficiency in automobiles or the rise of electric vehicles. Tesla, for instance, has redefined what efficient, powerful, and sustainable transport can look like.
In conclusion, the evolution of technology can be thought of as a carriage pulled by two horses: commoditization and optimization. Commoditization is the process of turning new technologies into everyday necessities, driven by entrepreneurial vision and economic forces. Optimization, on the other hand, focuses on making these technologies faster, smaller, more efficient, and more powerful over time. Together, these two forces ensure that technology not only continues to evolve but also becomes more accessible and beneficial to society. In the next blog, I will explore how this perspective might apply to artificial intelligence and, if AI evolves in line with these patterns, what new possibilities and innovations might arise.
NORTH STARS FOR THE INFORMATION AGE
OCTOBER 5, 2024
The education system today operates as a structured journey that takes students from general knowledge to deep specialization. It begins with schooling, where students are exposed to a broad spectrum of subjects—language, mathematics, social studies, science, and the arts. This phase of education is foundational, laying the groundwork for critical thinking and intellectual development.
As students move into pre-university education, their focus narrows slightly, often by choosing between streams like science, commerce, and humanities. Yet, the emphasis remains on a well-rounded understanding of the world. Subjects still span diverse fields to ensure that students develop a broad perspective before committing to any particular specialization.
At the graduate level, the funnel begins to taper. Specialization starts to take center stage, but even here, there is a conscious effort to ensure students have a broad understanding of their field. For instance, an engineering student, even if intending to specialize in electronics, is typically exposed to mechanical, civil, and electrical engineering in their first year. This breadth is intentional. It serves to equip students with a more holistic view of the interconnectedness of their discipline.
Post-graduate education and doctorates represent the peak of specialization, where individuals dive deep into niche areas of knowledge, often contributing original research. However, as the system advances through these stages, it becomes clear that the ultimate aim of education is not to produce experts in one narrow field. Instead, it seeks to cultivate individuals who can think critically, solve problems, and innovate regardless of the field they choose to work in.
But why is the system designed this way? The rationale behind it is simple: the creators of modern educational systems recognized the potential dangers of hyper-specialization. Focusing too narrowly on one subject can lead to myopic thinking. When we lack exposure to other disciplines, we risk missing the broader context in which our work exists. The early stages of education, therefore, are meant to counterbalance this by providing a wider lens through which students can view the world. It is through this multi-disciplinary exposure that students learn how to think, rather than merely what to think.
In today's world, however, many view education as an investment—an illiquid asset, so to speak. In financial terms, an illiquid asset is something that cannot be easily converted into cash without a significant loss in value. Education works much the same way. Years of effort, time, and money are poured into acquiring knowledge, with no immediate guarantee of a return. In countries like ours, where many families face financial constraints, this perspective is particularly common. People prefer to invest in educational specializations that are expected to yield high returns in the future—fields with significant demand in the job market. It’s a reasonable strategy in low- and medium-income economies, where education is seen as one of the few ways to achieve upward social mobility.
However, if we extend this investment analogy, another principle comes into play: diversification. Just as investors diversify their portfolios to spread risk and increase the likelihood of a return, it is prudent to diversify one’s intellectual investments. Specializing too early or too narrowly is risky. Times of economic uncertainty remind us that having a broader understanding across multiple fields is like holding intellectual “cash”—a safe asset that allows for flexibility and adaptability when conditions change. In a world that is rapidly evolving, having a wide range of skills and knowledge can buffer against the volatility of the job market and technological change.
This brings us to the concept of the polymath. Historically, polymaths have been those individuals who possess broad expertise across multiple domains. Their insatiable curiosity fuels a lifelong pursuit of knowledge, and their value lies not just in their depth of expertise in one field, but in their ability to draw connections between seemingly disparate subjects. The “cash” in this metaphor is that curiosity—the intellectual liquidity that allows them to continuously learn, pivot, and adapt. This, in many ways, is the whole point of education: to foster a love of learning and the ability to acquire new skills, rather than to simply impart a narrow set of specialized knowledge.
While the idea of the polymath is not new, it is increasingly relevant in the Information Age. Consider figures like Leonardo da Vinci, Benjamin Franklin, and Marie Curie. These polymaths contributed across multiple fields, from science to art to politics. They were not just experts in their chosen areas, but also possessed a broad, interdisciplinary understanding of the world. Below is a table of ten famous polymaths, highlighting their diverse areas of expertise:
These individuals have left an indelible mark on history, and their work across multiple fields made them invaluable contributors to human progress. Their ability to think broadly while mastering specific areas gave them the tools to solve complex problems.
In conclusion, as we move further into the Information Age, where the rate of change in technology and knowledge is faster than ever before, the polymath will act as a north star to help us find our bearings. While specialization is necessary, it is equally important to cultivate a broad understanding of various fields. Polymaths, with their interdisciplinary knowledge and adaptability, are uniquely positioned to thrive in this new world. As the half-life of knowledge continues to shrink, the ability to learn how to learn, to remain intellectually curious, and to connect ideas across domains will be the most valuable asset anyone can possess.
While this blog has largely emphasized the advantages of being a polymath, it is important to recognize the invaluable contributions of specialists as well. Specialists dive deep into specific fields, pushing the boundaries of knowledge and innovation in ways that often require intense focus and expertise. In medicine, for example, specialists are crucial for addressing complex health issues that require in-depth understanding and precision. In engineering, advanced research and technological breakthroughs often come from those who have dedicated themselves to a single domain. Polymaths and specialists complement each other—one providing breadth, the other depth—and together they help society progress. Both approaches to learning and expertise have their own place, and each is essential in a world where complex problems often require a combination of wide-ranging knowledge and deep specialization.