Founder & CEO, GeoLiquefy.
Founder & Curator, IndianExceptionalism.org.
Tech & Strategy, Fortress India.
Emergent Ventures Fellow.
Previously Data Scientist, building factory-scale prediction systems for Unilever.
Things I'm involved with and areas of interest
Infrastructure resilience. Reducing hidden failure in physical systems.
Institutions and culture. Foundations of long-term national growth.
Geography and statecraft. Terrain, ecology, and strategic continuity.
Interpretable AI. Systems that must be understood, not just optimized.
Heart disease is the leading cause of silent, non-communicable death worldwide. Cardiovascular diseases are classified into four types: coronary heart disease, heart failure, congenital heart disease, and cardiomyopathy. It is vital to diagnose heart disease early and accurately in order to avoid further injury and save patients' lives. We therefore need a system that can predict cardiovascular disease before the situation becomes critical. Machine learning has piqued the interest of researchers in the medical sciences, who have applied a variety of machine learning methods and approaches to heart disease prediction. In this work, we use the dataset from IEEE Data Port, which is, to the best of our knowledge, one of the largest publicly available datasets of individuals with cardiovascular disease. The dataset is a combination of the Hungarian, Cleveland, Long Beach VA, Switzerland, and Statlog datasets and includes important features such as maximum heart rate achieved, serum cholesterol, chest pain type, and fasting blood sugar. To assess the efficacy and strength of the developed model, several performance measures are used, including the ROC-AUC curve, specificity, F1-score, sensitivity, MCC, and accuracy. We propose a framework with a stacked ensemble classifier built from several machine learning algorithms, including the ExtraTrees classifier, Random Forest, and XGBoost. Our proposed framework attained an accuracy of 92.34%, which is higher than that reported in the existing literature.
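The stacking idea above can be sketched with scikit-learn. This is a minimal illustration, not the paper's exact pipeline: the data is synthetic (standing in for the IEEE Data Port features), XGBoost is omitted to keep the example dependency-light, and the base learners and meta-learner are assumptions.

```python
# Minimal sketch of a stacked ensemble for binary classification.
# Synthetic data stands in for the heart-disease features
# (maximum heart rate achieved, serum cholesterol, etc.).
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    ExtraTreesClassifier,
    RandomForestClassifier,
    StackingClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=11, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Base learners produce out-of-fold predictions, which become the
# input features of the meta-learner (here a logistic regression).
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
        ("et", ExtraTreesClassifier(n_estimators=100, random_state=42)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
acc = accuracy_score(y_te, stack.predict(X_te))
```

In a real setting the `estimators` list would be extended with an XGBoost classifier and the features replaced by the clinical variables named in the abstract.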
Artificial intelligence has been used as a tool in various disciplines. This chapter discusses the application of human–computer interaction (HCI) and artificial intelligence (AI) in psychology. It examines how new technologies assist the discipline of psychology and how they create a cyberspace that affects behavior, fosters negative relationships, and promotes fear of missing out, among other things. It also covers AI and human rights, i.e., how AI affects humans and vice versa. We further discuss how AI and HCI can be coupled to produce better user-centered designs using rationalistic, design, and cognitive engineering approaches. To produce more user-friendly products, the chapter also focuses on usability engineering and verification, a discipline dealing with human–computer interfaces and AI that draws on psychology, human factors, and cognitive science. Additionally, it traces how the Google homepage has varied over time, with minor tweaks to make it more interactive. The last section summarizes the chapter and discusses the future scope of democratization and decentralization of information technology, as well as how social media, including Web 2.0, shapes our behavior and how we perceive the world in general.
Uncertain factors such as the amount of compressive stress, the amount of shear stress, and the geometry of the beam significantly change the structural behaviour of a reinforced concrete (RC) deep beam, and predicting that behaviour is a complex task. Various researchers have attempted to solve this problem with deep learning and machine learning models. In this study, the authors use ensemble machine learning algorithms, namely Adaptive Boosting, Extreme Gradient Boosting, Random Forest, Gradient Boosting, and a Voting Regressor, to forecast RC deep beam shear strength. The performance of these models was evaluated using metrics such as the coefficient of determination (R2), mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE). The authors also optimized the ensemble learning models using custom hyperparameters. Among the five models, the XGBoost model showed the highest accuracy, predicting R2 as 0.99 with the least model error (MAE of 2.47 and RMSE of 1.45). A comparison of the ensemble learning models with mechanics-driven models from the United States, Chinese, European, British (CIRIA), and Canadian design codes showed that the ensemble learning models were superior in this regard.
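The four evaluation metrics named above are simple to compute directly; a stdlib-only sketch (toy numbers, not the paper's shear-strength data):

```python
import math

def regression_metrics(y_true, y_pred):
    """R2, MSE, RMSE, and MAE as used to score the shear-strength models."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    # Residual and total sums of squares.
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    mse = ss_res / n
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "MSE": mse,
        "RMSE": math.sqrt(mse),
        "MAE": sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n,
    }

# Toy example: four predicted shear strengths vs. measured values.
m = regression_metrics([3.0, 5.0, 7.0, 9.0], [2.8, 5.1, 7.3, 8.8])
```

One property worth noting when reading reported error pairs: for any sample, RMSE is always greater than or equal to MAE.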
This study addresses the challenge of enhancing pavement condition assessment methodologies by proposing a universal XGBoost-SHAP framework. Leveraging diverse numerical input variables, including cracking, plasticity index, maximum dry density, California bearing ratio, soil type, and layer thickness, this framework aims to derive pivotal pavement condition parameters efficiently. The research demonstrates the framework's efficacy in facilitating data-driven decision-making, offering a cost-effective alternative to traditional falling weight deflectometer (FWD) testing. Notably, the study utilizes a dataset of 2001 instances from the non-core road network of Andhra Pradesh State for model training and validation. Results reveal the clear advantages of the XGBoost-SHAP model over the conventional FWD approach, particularly in terms of cost-efficiency, transparency, and precision. Detailed analysis employing Shapley Additive Explanations (SHAP) identifies cracking percentage as a key predictor of surface condition parameters, while California Bearing Ratio (CBR) emerges as crucial for deflection ratio prediction, highlighting the model's predictive power and transparency. Among all the ensemble approaches, including Random Forest, XGBoost, LightGBM, and other ML algorithms, XGBoost exhibits the highest R2, the lowest MSE and MAE, and an extremely low MAPE, demonstrating its superior prediction accuracy. Overall, this research introduces a promising avenue for advancing pavement condition assessment, offering an economically viable, data-centric solution characterized by heightened accuracy and transparency. By bridging the gap between traditional methodologies and advanced machine learning techniques, the proposed framework holds promise for improving pavement management practices.
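The attribution step can be illustrated without the `shap` package itself: permutation importance is scikit-learn's model-agnostic cousin of SHAP-style attribution and yields the same kind of feature ranking. Everything below is a sketch on synthetic data, and the feature names are hypothetical stand-ins taken from the abstract; the toy target is deliberately dominated by cracking percentage to echo the paper's finding.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the abstract's input variables.
features = ["cracking_pct", "plasticity_index", "max_dry_density",
            "cbr", "soil_type", "layer_thickness"]
X = rng.normal(size=(300, 6))
# Synthetic target dominated by cracking percentage, with a smaller
# CBR contribution and some noise.
y = 3.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=300)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
# Shuffle each feature in turn and measure the drop in score.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = sorted(zip(features, result.importances_mean),
                 key=lambda kv: kv[1], reverse=True)
```

With the real pipeline, `model` would be the trained XGBoost regressor and `shap.TreeExplainer` would replace the permutation step to obtain per-instance attributions rather than a global ranking.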
Antifragility, a concept pioneered by Nassim Nicholas Taleb, has undergone significant development over the past decades. In his work, Taleb describes antifragility as the opposite of fragility—a system that not only withstands stress and volatility but actually thrives and improves as a result. However, many existing systems, including those in Information Technology (IT) and complex economic models, tend to fail under stress. This paper aims to explore the potential implementation of antifragility principles into software architecture. By embracing the concept of antifragility, software systems could be designed to not only withstand stressors but also harness them to enhance their robustness and performance. The authors recognize that traditional approaches to software architecture often focus on minimizing failure points and ensuring stability. While these approaches are valuable, they often neglect the dynamic nature of real-world systems and fail to adapt to unforeseen challenges. The paper proposes an alternative perspective that considers stress as an opportunity for improvement. By introducing antifragile elements into software architecture, such as decentralized decision-making, self-healing mechanisms, and adaptive resource allocation, the authors argue that software systems can become more resilient, responsive, and capable of capitalizing on stress-induced disruptions. To validate their ideas, the authors present case studies and practical examples of how antifragile software architectures could operate in various domains. They also discuss potential challenges and trade-offs associated with implementing antifragility, such as increased complexity and resource requirements. By shedding light on the possibility of embracing antifragility in software architecture, this paper seeks to inspire further research and innovation in creating more adaptive and robust software systems that thrive in the face of stress and uncertainty.
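One of the self-healing mechanisms mentioned above can be made concrete with a circuit breaker: a wrapper that fails fast after repeated errors, then probes the dependency again after a cooldown. This is a hypothetical minimal sketch, not taken from the paper's case studies.

```python
import time

class CircuitBreaker:
    """Fail fast after max_failures errors; retry after a cooldown."""

    def __init__(self, max_failures=3, cooldown=1.0):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None  # monotonic timestamp when the breaker tripped

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                # Open state: refuse immediately instead of piling load
                # onto a dependency that is already failing.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
                self.failures = 0
            raise
        self.failures = 0  # success resets the failure count
        return result
```

Stress here is informative rather than merely harmful: each failure updates the breaker's state, so the system's behavior adapts to the disruption instead of ignoring it.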