A platform to share and reflect on my journey across the worlds of management, innovation, and social impact. Here, you'll find a collection of my management thoughts, highlights from my books, research contributions, and presentations, all rooted in years of academic and practical experience.
Whether you're a student, practitioner, policymaker, or fellow thinker, this space is designed to provoke thought, encourage dialogue, and contribute meaningfully to both academic and applied conversations in business and beyond.
PLEASE SIGN UP FOR THE NEWSLETTER TO BE CONSIDERED FOR A FREE COPY OF AN UPCOMING BOOK.
Two articles in The Wall Street Journal (“Israel-Iran Conflict Spurs China to Reconsider Russian Gas Pipeline,” June 24, 2025, and “China to Block Its Rare Earth Experts from Spilling Their Secrets,” June 25, 2025) highlight actions China has planned or taken in response to the shifting geopolitical landscape created by recent global events. China's strategic positioning in the global economy is increasingly influenced by current geopolitical tensions, particularly those involving the United States and its allies. Both articles concern the energy and rare earths sectors. Following up on my previous two posts, in this one I will focus on those two sectors within the world's second-largest economy.
Chinese Energy Sector
China’s energy sector is undergoing a significant transformation as the country balances its reliance on conventional energy with ambitious goals for renewable and clean energy. Coal remains a dominant energy source, making up about 60% of China’s electricity generation. However, the government is gradually reducing coal dependence by improving efficiency and transitioning to cleaner alternatives, in response to environmental concerns and carbon reduction targets.
China has become a global leader in renewable energy, particularly in solar and wind power. It is the largest producer of solar panels and the world leader in wind energy, with substantial investments in both onshore and offshore wind farms. Hydropower also plays a major role, with China hosting the world’s largest hydropower projects. The country aims to increase its renewable energy capacity significantly, targeting 20% of its energy consumption from non-fossil fuels by 2025, and carbon neutrality by 2060.
In addition to renewables, China is heavily investing in clean energy technologies like nuclear power, hydrogen, and energy storage. Nuclear power capacity is expanding, and the country is exploring next-generation nuclear technologies. Hydrogen, especially green hydrogen produced via renewable energy, is a growing focus. China is also a global leader in battery manufacturing, supporting the widespread use of renewable energy through advancements in energy storage solutions.
Energy security remains a critical concern due to China’s growing reliance on imported oil and natural gas. The Wall Street Journal article titled “Israel-Iran Conflict Spurs China to Reconsider Russian Gas Pipeline” points to a geopolitical realignment consistent with PESTEL analysis: China is reconsidering the stalled “Power of Siberia 2” pipeline project because of concerns about the reliability of oil and gas supplies from the Middle East. The Belt and Road Initiative and investments in energy infrastructure worldwide further reflect its strategy to secure energy supplies. China’s energy policies are shaped by global geopolitical factors, necessitating diversification of energy partnerships and resources. As the country continues its energy transition, technological innovation and international cooperation will be key to achieving a sustainable and secure energy future.
Chinese Rare Earths Sector
China’s rare earths sector plays a central role in the global supply chain, as the country dominates the production and processing of these critical materials. Rare earth elements (REEs) are essential for a wide range of technologies, including electronics, renewable energy systems, defense, and electric vehicles. According to the U.S. Geological Survey and multiple other reputable sources, China holds roughly 44 million tonnes of the world’s estimated 110-120 million tonnes of known reserves (about 35-40%) and produces around 60-69% of the world’s rare earths, making it the largest player in this market.
The country has invested heavily in rare earth extraction and processing technologies, which has enabled it to maintain a commanding position in the global market. Most of China’s rare earth production is concentrated in the Bayan Obo mine in Inner Mongolia, one of the world’s largest deposits. Additionally, China’s rare earth sector benefits from a well-developed processing infrastructure, which adds significant value to raw materials before they are exported.
In recent years, China has focused on diversifying its rare earth supply chains and reducing its dependence on foreign countries for key raw materials. However, China’s dominance has led to geopolitical tensions, particularly with the U.S. and other Western countries, which are working to secure their own rare earth supplies. As a result, China has occasionally used rare earth exports as leverage in trade disputes.
To maintain its competitive advantage, China is increasingly focused on expanding its rare earth recycling capabilities and improving the efficiency of rare earth extraction from secondary sources. Additionally, the country is advancing technologies to use rare earths in high-performance applications, such as electric vehicles, wind turbines, and advanced military equipment.
Looking forward, China’s rare earths sector will likely continue to face growing competition from countries like the U.S., Australia, and Japan, which are developing their own production and processing capabilities. China’s future strategies will likely include strengthening domestic mining regulations, pursuing international collaborations, and further advancing recycling technologies to ensure a sustainable and secure supply of rare earths. The Wall Street Journal article titled “China to Block Its Rare Earth Experts from Spilling Their Secrets” puts a spotlight on the significance of know-how as a strategic lever in this geopolitical landscape.
PLEASE SIGN UP FOR THE NEWSLETTER TO BE CONSIDERED FOR A FREE COPY OF AN UPCOMING BOOK.
Operations and Supply Chain Management has long been a critical area of focus for businesses seeking to optimize performance and respond to evolving global challenges. Over the years, the field has been shaped by precise and structured analytical frameworks that offer clear methodologies for solving complex operational problems. These models, metrics, and methods provide a rigorous approach to driving efficiency and effectiveness in the movement of goods and services. However, what truly elevates operations and supply chain management from a technical discipline to a powerful strategic lever is its dynamic intersection with corporate strategy, global business trends, and geopolitical shifts. It is at this crossroads - where decisions around sourcing, production, and distribution intersect with broader economic, political, and cultural contexts - that supply chains become integral to a company's competitive advantage, influencing both tactical operations and long-term business strategy.
No wonder, then, that today supply chain management has emerged as a central topic of discussion across boardrooms, newsrooms, and academic forums alike. Once considered a behind-the-scenes function, supply chains are now recognized as critical enablers of competitive advantage - and, when disrupted, as potential sources of significant risk. The pandemic, geopolitical tensions, climate change, and rapid digital transformation have all thrust supply chain decisions into the strategic spotlight. From semiconductor shortages to container backlogs and the reshaping of global trade routes, the world has witnessed just how deeply supply chain dynamics influence economic resilience, national security, and everyday life. What was once a domain largely confined to operations specialists is now acknowledged as a vital field with far-reaching implications, demanding a broader, more integrated perspective - one that blends analytical rigor with strategic insight and global awareness.
In my previous post on the PESTEL framework, I highlighted six critical factors - Political, Economic, Social, Technological, Environmental, and Legal - that demand thorough and nuanced consideration in supply chain planning. Each of these dimensions alone warrants deep, ongoing analysis to navigate the complexities of today’s interconnected and fast-evolving global landscape. The challenge is only intensifying as the global context shifts rapidly, with new developments continuously reshaping risks and opportunities. In this post, however, I will narrow the focus to one particularly pressing element: geopolitical tensions. While the other PESTEL factors remain crucial, the escalating geopolitical landscape poses unique and immediate challenges. I will explore how these tensions are already impacting - and are poised to further disrupt - key sectors within the United States, underscoring the urgent need for resilient, adaptable supply chain strategies tailored to an increasingly fragmented global order.
Energy Sector
The U.S. energy sector is navigating a volatile geopolitical landscape, influenced by the Russia-Ukraine war, Middle East instability, and China-Taiwan tensions. These events directly impact global energy markets, disrupting oil and natural gas supply chains, altering energy pricing, and shifting trade routes. The Russia-Ukraine conflict, in particular, has led to significant reductions in Russian energy exports, causing global price hikes and a surge in demand for alternative energy sources. The U.S. has benefited from these price increases, but long-term stability remains uncertain as geopolitical shifts continue.
Middle East tensions, particularly the Israel-Iran conflict, threaten vital energy shipping routes like the Strait of Hormuz. Any escalation could disrupt oil flows, driving up prices and jeopardizing U.S. energy security. Meanwhile, tensions over Taiwan could disrupt supply chains for critical energy technologies, including those essential for clean energy transitions, such as solar panels and electric vehicle batteries.
In response, the U.S. energy sector must balance short-term reliance on fossil fuels with a long-term push for clean energy sources. This includes expanding renewable energy production, investing in storage technologies, and enhancing grid resilience. Companies should also adopt financial hedging strategies to manage price volatility and develop advanced geopolitical risk models to anticipate disruptions.
Energy security is paramount, with a focus on strengthening domestic oil and gas production, expanding the Strategic Petroleum Reserve, and ensuring a steady supply of critical minerals for clean energy technologies. The U.S. must also strengthen international energy alliances and diplomatic efforts to secure energy trade routes.
As geopolitical risks grow, the U.S. energy sector’s future hinges on its ability to diversify energy sources, invest in renewable technologies, and maintain flexible, resilient supply chains that can withstand global disruptions.
Technology Sector
The U.S. technology sector is facing significant challenges and opportunities due to current geopolitical tensions, including the Russia-Ukraine conflict, China-Taiwan issues, and Middle East instability. These tensions disrupt global supply chains, particularly for semiconductors, 5G networks, cybersecurity, and AI technologies. The Russia-Ukraine war has impacted the supply of advanced technology components, while sanctions and cyberattacks raise concerns about data security. Meanwhile, escalating China-Taiwan tensions threaten global semiconductor supply chains, and trade conflicts between the U.S. and China add complexity to technology exports.
To mitigate these risks, U.S. tech companies should diversify their supply chains, particularly for semiconductors, by investing in manufacturing facilities in stable regions like South Korea, Singapore, and Europe. The U.S. CHIPS Act offers opportunities for domestic production, which could help reduce dependence on conflict-prone areas. Additionally, increasing investments in cybersecurity is critical to defend against rising cyberattacks, especially state-sponsored threats, and to secure sensitive data and infrastructure.
As export controls tighten, especially with China, U.S. firms should engage with policymakers to shape clearer trade regulations and diversify markets in regions such as Europe, India, and Latin America. The growing demand for clean and renewable technologies, accelerated by geopolitical risks, presents another opportunity. U.S. tech firms can lead in the green energy sector by investing in energy storage, EV technologies, and smart grids.
AI and automation are also emerging as vital growth areas. AI-driven solutions can assist in forecasting geopolitical risks and optimizing supply chains, while automation technologies can reduce reliance on unstable global supply chains. Long-term investments in next-gen technologies like 6G, quantum computing, and advanced robotics will also help U.S. firms maintain a competitive edge and reduce vulnerabilities to global disruptions.
Automotive Manufacturing
The U.S. automotive manufacturing sector is increasingly vulnerable to geopolitical instability, including the ongoing Russia-Ukraine war and Middle East tensions, which are disrupting global supply chains and trade routes. Freight costs are rising due to rerouted shipping lanes, and tariffs are complicating the sourcing of components and materials from traditional low-cost regions. Additionally, the shift towards electric vehicles (EVs) adds new challenges, with increasing reliance on critical minerals and battery components.
To mitigate these risks, U.S. automakers should focus on enhancing supply chain resilience by nearshoring key components, particularly in North America and allied nations, to reduce reliance on distant suppliers in Asia. Flexible manufacturing processes are also essential, allowing manufacturers to adapt quickly to disruptions in component availability, such as semiconductors or batteries.
Investment in EV and autonomous vehicle technology is crucial as the demand for electric powertrains and connected vehicles rises. Automakers should align their R&D efforts with this shift, while also partnering with clean energy companies and local governments to develop EV infrastructure. Additionally, the growing defense commitments, particularly with NATO, are expected to drive demand for military vehicles and advanced transport systems. Manufacturers can seize this opportunity by expanding production lines for military-spec vehicles and parts, ensuring a balance between defense and civilian production needs.
In summary, U.S. automotive manufacturers must adapt to geopolitical uncertainties by diversifying supply chains, investing in EV and autonomous vehicle technology, and leveraging defense-related production opportunities to maintain competitiveness and mitigate risks.
Hi-Tech Manufacturing
The high-tech manufacturing sector is experiencing significant disruptions due to rising tariffs, export controls, and the ongoing U.S.-China trade conflict, particularly affecting semiconductors, telecommunications equipment, and advanced materials. The semiconductor shortage has exposed the fragility of global supply chains, prompting companies to reassess their strategies.
To mitigate these risks, high-tech companies should diversify production and sourcing, especially for semiconductors, by expanding domestic manufacturing capabilities. Investments like Intel’s $20 billion Arizona facility, supported by U.S. government incentives such as the CHIPS Act, can secure supply chain stability and reduce reliance on Taiwan and China.
Firms should also explore vertical integration strategies, sourcing key materials and components from internal or closely allied regions, including rare earths and critical materials for electronics. This reduces exposure to global supply chain vulnerabilities and enhances control over production processes.
As digitalization increases, cybersecurity becomes a top priority. Protecting intellectual property and sensitive data from cyberattacks, particularly state-sponsored threats, is essential. Implementing a zero-trust security model and continuous cyber activity monitoring will be crucial to maintaining system integrity.
Finally, accelerating R&D in advanced technologies like artificial intelligence (AI), quantum computing, and 5G is vital to staying competitive. Collaborating with universities, research institutions, and the federal government will support innovation and help high-tech companies future-proof their operations.
In summary, to navigate geopolitical uncertainty, high-tech manufacturers must diversify supply chains, invest in domestic production, enhance cybersecurity, and prioritize R&D in emerging technologies to maintain a competitive edge.
Aerospace Manufacturing
The aerospace manufacturing sector is heavily impacted by global geopolitical events, particularly in defense and commercial aviation. The ongoing Ukraine conflict has emphasized the strategic importance of defense aerospace, while restrictions on technology transfers and shifting trade blocs have disrupted commercial aviation. Supply chain issues, particularly for critical components like titanium and avionics, remain a major concern.
To mitigate these challenges, aerospace companies should consider diversifying their production and sourcing activities to allied nations in North America and Europe, reducing exposure to geopolitical risks. Establishing new manufacturing plants or forming joint ventures in strategic locations will provide stability.
With rising NATO defense commitments, companies should leverage government defense contracts, particularly in military aviation. Expanding into advanced military products, such as drones, combat aircraft, and defense systems, will allow aerospace firms to balance the cyclicality of commercial demand with growing defense needs.
Investing in advanced materials, such as composites and titanium alloys, is essential for producing lightweight, fuel-efficient aerospace components. Additionally, increased R&D in autonomous flight, hypersonic technologies, and sustainable aviation fuel (SAF) will position companies for the future of air travel.
To enhance supply chain resilience, aerospace manufacturers should adopt a more flexible approach to sourcing and inventory management. Establishing secure supplier networks and increasing local sourcing of critical components like avionics and engines will help minimize disruptions.
In summary, aerospace manufacturers must diversify production locations, capitalize on defense contracts, innovate with advanced materials and technologies, and strengthen supply chain resilience to navigate geopolitical uncertainties and ensure long-term competitiveness.
Pharmaceuticals and Biotech Manufacturing
The pharmaceutical and biotech manufacturing sector is highly sensitive to geopolitical tensions, especially those affecting the global movement of raw materials like active pharmaceutical ingredients (APIs) and finished drugs. Disruptions in supply chains and tariffs on materials from conflict-prone regions can impact production schedules and costs. Additionally, increasing concerns over national security regarding healthcare products are influencing the cross-border flow of medicines.
To mitigate these risks, pharmaceutical companies should consider nearshoring key production facilities, particularly for APIs, to more stable regions such as North America and Europe. This strategy will reduce dependence on politically unstable regions while also cutting lead times and transportation costs.
Given the volatility of global supply chains, maintaining buffer stocks of essential raw materials and finished products, such as vaccines and emergency treatments, is crucial. This strategic stockpiling will help mitigate disruptions during times of crisis.
Pharmaceutical firms should also strengthen collaboration with government regulators to ensure the smooth approval and production of life-saving drugs, particularly during emergencies. By working closely with regulatory bodies like the FDA, companies can expedite production and distribution processes.
Finally, investing in R&D for next-generation treatments and vaccines is critical. Collaborating with universities, government research initiatives, and international health organizations can drive innovations in areas like gene therapy, personalized medicine, and other biotech fields, positioning companies to address emerging global health threats.
In summary, pharmaceutical and biotech manufacturers must diversify production, invest in agile manufacturing processes, strengthen regulatory collaborations, and prioritize R&D to navigate geopolitical uncertainties and ensure the continuous supply of critical healthcare products.
Financial Sector
The U.S. financial sector faces growing risks from global geopolitical instability, particularly oil price volatility linked to Middle East tensions and sanctions on Russia and Iran. Oil-price-driven inflation reduces household spending and consumer confidence, while complex regulatory burdens due to sanctions increase compliance costs for financial institutions. However, volatility also presents opportunities, as it drives higher trading activity and demand for advisory services on managing geopolitical risk.
To navigate these challenges, financial institutions should enhance their sanctions and anti-money laundering (AML) systems by investing in AI-driven compliance tools and integrating geopolitical risk tracking platforms. Regular audits and ongoing staff training on evolving sanctions are also crucial.
Banks should expand macroeconomic risk management tools, including geopolitical risk hedges and customized derivatives tied to energy prices, to help clients manage volatility from geopolitical events. Additionally, offering macroeconomic advisory services will enable institutions to support clients facing the impacts of inflation, oil price fluctuations, and trade restrictions.
As geopolitical conditions shift, banks should align their product offerings with market dynamics, such as creating funds focused on commodities or defense stocks. Impact investing and ESG-aligned products may also attract clients seeking stability. Advisory services for navigating trade risks and political instability will further position financial institutions as trusted partners in uncertain times.
Given the potential drop in consumer credit demand due to rising oil prices, financial firms should adjust credit offerings, offer flexible repayment options, and focus on alternative financing solutions like peer-to-peer lending.
Finally, financial institutions must ensure transparent communication during crises, helping customers understand market volatility and make informed decisions. Regular scenario planning and stress testing will help maintain financial stability amid geopolitical upheaval.
Retail Sector
Rising energy and shipping costs, fueled by ongoing geopolitical tensions and global supply chain disruptions, are significantly driving up consumer prices, especially for goods that rely heavily on transportation, such as groceries, apparel, and household essentials. This inflationary pressure is forcing consumers to adjust by reducing their basket sizes, creating a challenge for retailers to adapt quickly to changing demand and cost structures. Additionally, logistical bottlenecks and vulnerabilities in conflict-prone regions have made timely inventory replenishment increasingly difficult, further complicating cost management and operational efficiency.
To navigate these challenges, retailers must focus on optimizing their supply chains, employing transparent pricing tactics, and diversifying their sourcing strategies. First, optimizing supply chains for agility is crucial. Retailers should leverage advanced AI and machine learning tools to improve demand forecasting, which enables more accurate inventory management aligned with shifting consumer behavior. Flexible inventory models, combining just-in-time inventory with safety stock buffers, will help reduce holding costs while maintaining service levels. Additionally, diversifying logistics networks by incorporating rail, air, and last-mile delivery options can mitigate risks associated with delays or route closures. Strengthening supplier relationships through collaborative planning will ensure a rapid response to disruptions, and investing in regional warehousing will reduce reliance on vulnerable long-haul shipping routes.
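As a purely illustrative aside, the sketch below (in Python, with made-up demand and lead-time numbers) shows the textbook safety-stock and reorder-point calculation behind such hybrid just-in-time and buffer policies; longer or less reliable shipping routes raise the lead-time term and therefore the size of the buffer.

```python
# Illustrative only: a standard safety-stock calculation that a retailer might use
# to size inventory buffers on top of a just-in-time replenishment policy.
# All numbers below are hypothetical.
from math import sqrt
from statistics import NormalDist

def safety_stock(daily_demand_std: float, lead_time_days: float, service_level: float) -> float:
    """Safety stock = z * sigma_demand * sqrt(lead time)."""
    z = NormalDist().inv_cdf(service_level)  # z-score for the target service level
    return z * daily_demand_std * sqrt(lead_time_days)

def reorder_point(avg_daily_demand: float, lead_time_days: float, ss: float) -> float:
    """Reorder when inventory falls to expected lead-time demand plus the buffer."""
    return avg_daily_demand * lead_time_days + ss

ss = safety_stock(daily_demand_std=40, lead_time_days=9, service_level=0.95)
rop = reorder_point(avg_daily_demand=120, lead_time_days=9, ss=ss)
print(f"safety stock ~ {ss:.0f} units, reorder point ~ {rop:.0f} units")
```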
Next, employing transparent pricing tactics is essential. Retailers should clearly communicate the impact of rising fuel and shipping costs on product prices through in-store signage, online messaging, and social media, helping to build trust and reduce price sensitivity among consumers. Dynamic pricing strategies, informed by real-time data, will allow for timely adjustments to keep margins intact while responding to fluctuations in supply costs. Offering value-based promotions on high-margin or staple goods, along with bundling products strategically, can also help maintain profitability while increasing basket sizes.
Finally, diversifying sourcing away from conflict-prone regions is a critical strategy. Retailers should expand their supplier base geographically by identifying stable regions like Southeast Asia, Latin America, and Eastern Europe, reducing dependence on any single country or trade route. Nearshoring initiatives in countries like Mexico or the U.S. can shorten supply chains, lower transportation costs, and improve supply reliability. Investing in real-time geopolitical risk intelligence platforms will help retailers assess supplier risks and proactively adjust their sourcing strategies. Working closely with suppliers to enhance crisis management plans and increase inventory buffers will help ensure resilience in the face of external shocks.
By focusing on these strategic levers, retailers can better navigate the pressures of rising costs and supply chain disruptions while maintaining customer satisfaction and profitability.
Agriculture Sector
The U.S. agriculture sector is increasingly vulnerable to global geopolitical events, including trade conflicts, sanctions, and natural disasters, which disrupt supply chains, increase input costs, and create volatility in commodity markets. The Russia-Ukraine conflict, for instance, has severely impacted global grain supplies, particularly wheat and corn, driving up prices but also complicating logistics for U.S. producers. Additionally, ongoing trade wars, such as those with China, add uncertainty to the agricultural export landscape, especially in sectors like soybeans, pork, and dairy. The implications of climate change further exacerbate these challenges, affecting growing seasons and agricultural output.
To mitigate these risks, U.S. agriculture businesses can pursue several strategies. First, diversifying export markets is key. Expanding trade relationships with emerging regions like Africa, Southeast Asia, and Latin America can reduce dependency on traditional markets and help stabilize exports. Actively engaging in trade agreements like the USMCA can also create more predictable access to global markets. Second, mitigating input cost volatility through vertical integration is crucial. By nearshoring or sourcing critical farming inputs domestically, producers can reduce reliance on foreign suppliers prone to geopolitical instability. Additionally, investing in sustainable farming practices, such as precision agriculture, can reduce input costs while increasing yields.
Third, leveraging advanced risk management tools will provide further stability. U.S. agriculture firms should use commodities hedging and futures markets to lock in prices and manage volatility. Climate risk insurance and export risk mitigation tools, such as political risk insurance, can protect against the financial impact of natural disasters and trade disruptions. Lastly, focusing on sustainability and innovation, such as investing in climate-resilient crops and water-efficient technologies, will enhance long-term resilience. Building consumer trust through transparency, showcasing the quality and stability of U.S. agriculture, and collaborating with governments and NGOs on global food security issues will strengthen the sector's position in an increasingly uncertain world.
Logistics Sector
The U.S. logistics sector faces significant challenges due to ongoing geopolitical instability, particularly from conflicts such as the Russia-Ukraine war, tensions between Israel and Iran, and trade disputes with China. These tensions disrupt global supply chains, affecting transport routes, labor availability, fuel prices, and shipping costs. The Strait of Hormuz, a key oil shipping route, is especially vulnerable to escalation in the Israel-Iran conflict, while the Ukraine war has disrupted agricultural and industrial supply chains. The overall impact on shipping is compounded by bottlenecks like those in the Suez Canal and labor shortages, further inflating costs.
To address these challenges, logistics firms must diversify transportation routes and modal options. Exploring alternative shipping lanes, such as Central Asian rail corridors or Southeast Asian maritime routes, can reduce dependency on the Middle East. Additionally, U.S. firms should adopt digital tools for real-time tracking and AI-driven logistics platforms to enhance supply chain visibility and manage risks more effectively. Predictive analytics can help firms anticipate disruptions and reroute shipments proactively.
Further resilience can be built through nearshoring or reshoring operations, bringing manufacturing and distribution closer to home. This reduces reliance on international routes susceptible to geopolitical disruptions. Investing in smart warehousing, automation, and AI-powered route optimization will also help mitigate delays and improve operational efficiency.
U.S. logistics companies should also strengthen cybersecurity to protect against rising state-sponsored cyber-attacks, especially as digital systems become more central to freight management. Ensuring compliance with international trade regulations and sanctions is critical, as geopolitical tensions often result in new embargoes or trade restrictions.
Strategic partnerships with multinational logistics providers and government agencies can offer additional support and expertise during times of crisis. Finally, managing fuel and energy costs through hedging, efficient fleets, and green logistics solutions will help logistics firms cope with the volatile energy market.
Defense Sector
The U.S. defense sector is facing growing pressures due to rising geopolitical risks, including the Russia-Ukraine war, tensions in the South China Sea and Taiwan Strait, and instability in the Middle East, particularly the Israel-Iran conflict. These geopolitical developments have led to increased military spending globally, particularly in regions facing direct threats, and a shift in defense priorities, creating both challenges and opportunities for U.S. defense contractors.
The Russia-Ukraine conflict has triggered a surge in defense spending, especially among NATO members, driving demand for advanced weaponry, air defense systems, drones, and surveillance technology. Similarly, rising tensions with China over Taiwan have spurred a demand for missile defense systems, advanced fighters, and naval assets. Instability in the Middle East, particularly around the Strait of Hormuz, has increased the need for missile defense and naval technologies to secure vital energy routes.
To capitalize on these developments, U.S. defense contractors should focus on expanding defense procurement, particularly in missile defense, intelligence, and cyber warfare capabilities. U.S. firms should align with NATO’s strategic priorities, ensuring their products meet the growing needs of NATO members and East Asian allies.
Geostrategic military alliances will be crucial, as defense firms must navigate complex export regulations while expanding technology transfers to key allies. Cybersecurity investments are also critical, with increasing cyber warfare threats from adversaries like Russia and China. Strengthening cyber resilience and investing in cutting-edge technologies will help the U.S. maintain its technological edge.
To mitigate supply chain disruptions, defense contractors should consider nearshoring critical components and increasing supply chain transparency using digital technologies like blockchain. Investing in advanced technologies such as hypersonic weapons, AI, and robotics will ensure U.S. defense superiority. Lastly, sustainability initiatives, including energy-efficient technologies and green defense practices, can help meet both operational and environmental goals.
PLEASE SIGN UP FOR THE NEWSLETTER TO BE CONSIDERED FOR A FREE COPY OF AN UPCOMING BOOK.
PESTEL analysis evolved from earlier environmental scanning tools through decades of refinement by strategic planners and academics. Francis Aguilar’s 1967 book “Scanning the Business Environment” introduced ETPS (Economic, Technical, Political, and Social) as a systematic approach to analyzing the external environment. This evolved into PEST, then PESTEL, as strategists recognized that Environmental and Legal factors deserved separate attention. Unlike frameworks attributed to single authors, PESTEL emerged from collective practice as organizations sought structured approaches to comprehend increasingly complex external environments.
The framework’s enduring appeal stems from its comprehensive yet manageable scope. By organizing environmental factors into six distinct categories, PESTEL ensures systematic consideration of external forces while preventing overwhelming complexity. Each category contains multiple factors that vary by industry and geography, but the six-category structure provides consistent analytical discipline. This balance between comprehensiveness and practicality explains why PESTEL remains a cornerstone of strategic analysis despite its simplicity.
Political Factors encompass government actions and political conditions affecting business operations. These include government stability and policy continuity, taxation policies and fiscal measures, trade regulations and tariffs, labor laws and employment regulations, environmental regulations, and political tensions or conflicts. The rise of economic nationalism has elevated political factors' importance, as seen in Brexit's impact on UK businesses or U.S.-China trade tensions affecting global supply chains. Companies must increasingly navigate divergent political environments as globalization faces political headwinds.
Economic Factors examine macroeconomic conditions and trends shaping business conditions. Key considerations include GDP growth rates and economic cycles, inflation and interest rates, exchange rate fluctuations, unemployment levels and labor availability, income distribution and purchasing power, and credit availability. The 2008 financial crisis demonstrated how economic factors can rapidly reshape entire industries. More recently, pandemic-induced economic volatility has forced companies to build resilience against extreme economic swings while managing through unprecedented monetary and fiscal interventions.
Social Factors capture demographic, cultural, and societal trends affecting demand and operations. These encompass population demographics and generational shifts, lifestyle changes and consumer preferences, educational levels and skill availability, health consciousness and wellness trends, cultural values and social movements, and urbanization patterns. The rise of Generation Z with distinct values around sustainability and social justice forces companies to adapt products, marketing, and corporate positions. Social media amplifies social trends, making companies more vulnerable to rapid shifts in public sentiment.
Technological Factors assess how technology evolution creates opportunities and threats. Critical elements include automation and artificial intelligence adoption, digital transformation and platform emergence, cybersecurity challenges and data privacy, research and development intensity, technology transfer rates, and infrastructure development. The pace of technological change continues accelerating, with AI's recent emergence promising disruption comparable to the internet's impact. Companies must monitor not just technologies within their industries but adjacent innovations that might reshape competitive landscapes.
Environmental Factors have gained prominence as climate change and sustainability pressures intensify. Key considerations span climate change impacts and adaptation needs, resource scarcity and circular economy pressures, biodiversity loss and ecosystem degradation, pollution regulations and emissions standards, renewable energy transitions, and extreme weather events. Environmental factors increasingly intertwine with others—political (regulations), economic (carbon pricing), social (consumer preferences), and technological (clean tech innovations). Companies face growing pressure to address environmental impacts across value chains.
Legal Factors examine laws and regulations affecting business operations. These include employment and labor laws, consumer protection regulations, intellectual property regimes, antitrust and competition law, data protection and privacy regulations, and industry-specific regulations. The digital economy has spawned new legal challenges around data sovereignty, platform liability, and algorithmic accountability. Regulatory divergence across jurisdictions complicates compliance for global companies, particularly in emerging areas like AI governance and cryptocurrency regulation.
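For teams that want to work with the framework in a more operational way, the following sketch (Python, with invented factors and scores) shows one possible way to capture a PESTEL scan as structured data and rank factors by estimated impact and likelihood. It is an illustration of how a scan might be organized, not part of the framework itself.

```python
# A minimal, hypothetical sketch of organizing a PESTEL scan as data, so that factors
# can be filtered and ranked rather than left as an unstructured list.
# Category names follow the framework; the factors and scores are invented examples.
from dataclasses import dataclass

@dataclass
class Factor:
    category: str        # one of the six PESTEL categories
    description: str     # the external trend or event being tracked
    impact: int          # estimated strategic impact, 1 (low) to 5 (high)
    likelihood: int      # estimated likelihood over the planning horizon, 1 to 5

    @property
    def priority(self) -> int:
        # Simple impact x likelihood score used to separate signal from noise
        return self.impact * self.likelihood

scan = [
    Factor("Political", "New export controls on advanced chips", impact=5, likelihood=4),
    Factor("Economic", "Sustained freight-cost inflation", impact=3, likelihood=4),
    Factor("Legal", "Divergent AI governance rules across jurisdictions", impact=4, likelihood=3),
]

# Surface the highest-priority factors first, regardless of category
for f in sorted(scan, key=lambda f: f.priority, reverse=True):
    print(f"{f.category:12s} {f.priority:2d}  {f.description}")
```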
When to Use This Framework
PESTEL analysis proves most valuable during strategic planning cycles when organizations need systematic external environment assessment. Use it when entering new geographic markets to understand local operating conditions across all six dimensions. The framework helps when evaluating long-term investments by identifying external trends that might affect returns. Apply PESTEL when disruption signals appear to comprehend whether changes represent isolated events or systematic shifts.
The framework excels at preventing blind spots by forcing comprehensive environmental scanning. Use it to challenge internal assumptions about external conditions and to ensure strategic plans account for external realities. PESTEL particularly helps when building scenarios by identifying key external uncertainties across categories. It provides structure for organizing diverse external intelligence into actionable strategic insights.
Key Decisions It Clarifies
PESTEL illuminates critical decisions about market participation and strategic positioning. It clarifies which markets offer favorable conditions across multiple dimensions versus those with accumulating headwinds. The analysis reveals whether challenges are temporary or structural, guiding decisions about persistence versus pivot. By identifying early signals of change, PESTEL helps time strategic moves—when to accelerate investment or begin strategic retreat.
The framework guides capability development by highlighting which external changes require new organizational competencies. Rising environmental regulations might necessitate sustainability expertise. Technological shifts might demand digital capabilities. PESTEL also indicates where external partnerships could help navigate complex environments—local partners for political connections, technology partners for digital transformation, or NGO relationships for social legitimacy.
Evolution and Contemporary Applications
PESTEL analysis has undergone significant evolution since its origins in the 1960s, expanding from simple environmental scanning to sophisticated strategic intelligence gathering that addresses increasingly interconnected global challenges. Contemporary applications extend beyond traditional strategic planning to encompass risk management, stakeholder engagement, sourcing, and sustainability strategy development. The framework has adapted to address complex global challenges like climate change, digital transformation, and geopolitical fragmentation that require integrated analysis across all six factors simultaneously.
The rise of stakeholder capitalism has transformed PESTEL from a tool focused primarily on identifying external threats and opportunities to one that helps organizations understand their broader societal context and responsibilities. Modern PESTEL analysis incorporates environmental, social, and governance (ESG) considerations that affect not just operational conditions but also organizational legitimacy and stakeholder relationships. This evolution reflects growing recognition that external environment analysis must consider multiple stakeholder perspectives rather than just shareholder interests.
Globalization has created new applications for PESTEL analysis that address how local, regional, and global factors interact to create complex operating environments. Multinational organizations use the framework to understand how global trends manifest differently across local markets while identifying systemic risks that transcend geographic boundaries. The framework now addresses cultural differences, regulatory harmonization challenges, and cross-border spillover effects that earlier applications largely ignored.
The acceleration of change across all PESTEL dimensions has led to more dynamic applications that emphasize continuous monitoring rather than periodic analysis. Organizations increasingly use the framework for real-time environmental scanning that identifies emerging trends before they become established patterns. This shift toward continuous intelligence gathering reflects recognition that external environments change faster than traditional strategic planning cycles can accommodate.
Digital Age Application
Digital transformation has made PESTEL analysis both more critical and more complex. The pace of change across all six factors has accelerated, requiring more frequent analysis updates. Digital technologies create cascading effects—technological changes trigger regulatory responses, reshape social behaviors, and disrupt economic models. Real-time data enables continuous environmental monitoring rather than periodic analysis, but risks information overload without disciplined frameworks.
Political factors increasingly include digital sovereignty, data localization requirements, and algorithmic regulation. Economic factors must consider digital economy dynamics like platform economics and cryptocurrency impacts. Social analysis requires understanding online community dynamics and viral social movements. Technological scanning expands beyond IT to include biotech, cleantech, and other converging technologies. Environmental factors incorporate digital sustainability like data center energy consumption. Legal frameworks struggle to keep pace with digital innovation, creating regulatory uncertainty.
The interconnectedness of PESTEL factors intensifies in digital contexts. Social media movements (Social) drive political action (Political) leading to new regulations (Legal) that affect technology deployment (Technological) with economic consequences (Economic) and environmental implications (Environmental). This systemic complexity requires more sophisticated analysis that examines factor interactions, not just individual trends.
Common Misapplications and Limitations
The most common misapplication involves superficial analysis that lists factors without assessing strategic implications. Many PESTEL analyses become academic exercises cataloging trends rather than tools for strategic decision-making. Each identified factor should link to specific strategic opportunities or threats. Without this connection, PESTEL devolves into interesting but irrelevant environmental commentary.
Information overload represents another challenge. Comprehensive analysis across six categories can generate overwhelming detail that obscures critical insights. Effective PESTEL requires prioritization—identifying which factors most affect strategy rather than documenting every possible influence. The framework provides categories but not criteria for determining materiality. Strategic judgment must filter noise from signal.
The framework's macro focus can miss industry-specific dynamics that matter more than broad trends. While demographic aging affects many industries, its specific implications vary dramatically between healthcare and entertainment. Generic PESTEL factors require translation into industry-specific impacts. The framework also struggles with factor interactions—how technological change enables new business models that trigger regulatory responses.
Geographic scope creates analytical challenges for global companies. Should analysis occur at global, regional, or national levels? Different geographic markets show different PESTEL profiles, but analyzing each market separately creates unwieldy complexity. Companies must balance comprehensive coverage with analytical practicality, often focusing detailed analysis on core markets while monitoring broad trends elsewhere.
Data mining enables computers to learn how to make informed decisions based on data. These decisions can range from forecasting tomorrow's weather and filtering out spam emails to identifying the language of a website or even suggesting compatible matches on dating platforms. The scope of data mining applications is vast and continually expanding as new uses are discovered.
We now live in an era characterized by the relentless generation of data. While many refer to this period as the "information age," it might be more accurate to describe it as the age of data. Every day, enormous volumes of data—ranging from terabytes to petabytes—are produced and transmitted across computer networks, websites, and various devices. This data explosion stems from increasing digitization and the advancement of technologies in computing, sensing, data storage, and dissemination.
Around the world, businesses create enormous data sets from activities such as sales transactions, stock market operations, product listings, marketing efforts, corporate performance tracking, and customer reviews. In the scientific and engineering domains, petabytes of data are routinely produced by tools like remote sensors, measurement instruments, experiments, and environmental monitoring systems. The medical and biotech sectors add to this deluge with data from genome sequencing machines, lab reports, electronic health records, patient monitoring systems, and diagnostic imaging. Meanwhile, search engines handle billions of queries daily, processing many petabytes of information. Social media platforms contribute significantly too, generating a massive flow of text, images, videos, and forming new digital communities and networks. Clearly, the number of sources producing vast data is virtually limitless.
This rapidly expanding, highly accessible, and massive volume of data defines our present as the true data era. To harness value from this data, we need robust and adaptable tools that can automatically identify meaningful patterns and translate raw information into structured knowledge. This demand is what gave rise to the field of data mining.
At its core, data mining is the process of uncovering significant patterns, trends, and knowledge from large data collections. The term “data mining,” first popularized in the 1990s, evokes the imagery of searching for gold nuggets within mountains of rock—though perhaps a better label might have been “knowledge mining from data.” However, that phrase is lengthy, and alternatives like “knowledge mining” do not fully convey the focus on analyzing large-scale data. Despite being somewhat of a misnomer, the term “data mining” gained popularity for its vivid metaphor. Related terms include knowledge discovery from data (KDD), pattern recognition, data analytics, knowledge extraction, information harvesting, and data archaeology.
Data mining is still a relatively young discipline, but it’s rapidly evolving and holds great promise as we move from an era saturated with data into one guided by insight and information.
Some view data mining as synonymous with knowledge discovery from data (KDD), while others regard it as one crucial phase within a larger KDD process. This broader process typically involves the following iterative steps:
Phases in the Knowledge Discovery Process:
Data Preparation
Data Cleaning: Eliminating errors, noise, and inconsistencies.
Data Integration: Merging data from multiple sources, often as a preliminary step before loading into a data warehouse.
Data Transformation: Structuring or summarizing data into formats suitable for mining, which may involve aggregation.
Data Reduction: Reducing data size while preserving its integrity.
Data Selection: Extracting data relevant to the specific analysis task.
Data Mining
The core stage where advanced algorithms and intelligent techniques are applied to find patterns or develop models. This stage often involves methodologies from machine learning, statistics, computer science, optimization, and domain-specific fields like biology, linguistics, or urban planning.
Pattern/Model Evaluation
Identifying the most meaningful and valuable patterns or models using predefined measures of interest or utility.
Knowledge Presentation
Communicating the discovered insights through visualization, summaries, or other knowledge representation techniques.
While this structure positions data mining as one part of the KDD pipeline, in many real-world settings—especially in business, media, and academia—the term data mining is used interchangeably with the entire knowledge discovery process. Because it's simpler and widely recognized, we often adopt this broader interpretation.
In summary, data mining refers to the overall process of detecting valuable knowledge and patterns within vast datasets. These datasets may reside in traditional databases, data warehouses, web platforms, or be generated in real time through streaming systems.
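As a concrete, self-contained illustration of how these phases connect, the sketch below runs a toy pipeline in Python with pandas and scikit-learn on synthetic data. The data sources, the features, and the choice of clustering as the mining step are assumptions made for the example, not drawn from any particular application.

```python
# A minimal, illustrative walk-through of the KDD phases described above.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# -- Data integration: merge two hypothetical sources on a shared key
orders = pd.DataFrame({"customer_id": range(200), "spend": rng.gamma(2.0, 50.0, 200)})
visits = pd.DataFrame({"customer_id": range(200), "visits": rng.poisson(5, 200)})
data = orders.merge(visits, on="customer_id")

# -- Data cleaning: drop duplicates and impossible values
data = data.drop_duplicates().query("spend >= 0 and visits >= 0")

# -- Data transformation: scale features to comparable ranges
features = StandardScaler().fit_transform(data[["spend", "visits"]])

# -- Data reduction: project onto principal components (trivial here, but shows the step)
reduced = PCA(n_components=2).fit_transform(features)

# -- Data mining: discover groups of similar customers
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(reduced)

# -- Pattern evaluation / knowledge presentation: summarize each discovered segment
data["segment"] = model.labels_
print(data.groupby("segment")[["spend", "visits"]].mean().round(1))
```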
At its 2022 annual meeting, the Surgical Outcomes Club—a leading consortium of surgeons and health services researchers dedicated to advancing surgical outcomes science—hosted a panel of four experts to discuss the growing role of predictive analytics and artificial intelligence (AI) in surgical research. The discussion centered on three core domains where AI is poised to make a significant impact: computer vision, digital transformation at the point of care, and the utilization of electronic health records (EHR) data. The panel addressed both the opportunities and inherent challenges associated with integrating AI into surgical practice.
Computer Vision: Giving Machines Eyes in the OR
The increasing capture of surgical video—routinely generated during minimally invasive and robotic procedures, and now expanding into open surgeries—offers a new frontier for AI. Real-time video annotation powered by computer vision can help evaluate surgical performance, identify complex anatomy, and provide intraoperative feedback to mitigate technical errors. Beyond performance assessment, this technology holds promise for surgical education, allowing skills training and behavior review through tool and hand tracking, and phase annotation.
With the advent of convolutional neural networks and other advanced models, video-based AI tools can now approach the visual complexity of surgery. These innovations can enhance surgeon training and potentially assist with decision-making during procedures. However, real-world implementation remains limited by several barriers, including the complexity of surgical environments, insufficient generalizability of current models, and a lack of large, annotated, and diverse datasets. Data sharing limitations and institutional barriers further complicate the creation of robust open-source datasets.
While current efforts rely on public video sources of inconsistent quality, a recent consensus suggests retrospective training tools may be feasible within two years, and real-time applications may emerge within the next decade. Recognizing the early-stage nature of this field is critical to fostering collaboration between surgeons and engineers, ensuring that AI tools ultimately support, rather than replace, surgical expertise.
Building Surgical Intelligence Through Video-Based Analytics
Video analytics offer the potential to assist surgeons during operations by identifying key anatomical landmarks, outlining tumor margins, or analyzing instrument usage patterns. Particularly in rare or unexpected events—such as intraoperative bleeding—AI could provide scenario-based recommendations to guide next steps. Though not yet deployed in operating rooms, emerging research outlines how real-time decision support systems could soon become a reality.
Leveraging Data for Surgical Innovation
Despite the influx of new surgical devices, there's limited insight into how they compare to existing techniques. Video analysis can quantify how new technologies influence surgical workflow and learning curves, particularly during the adoption of minimally invasive or robotic procedures. Side-by-side comparisons of similar cases can inform best practices and highlight improvements or setbacks introduced by novel tools.
Surgical video provides “ground truth” data, offering unparalleled insights into intraoperative behavior. Beyond enhancing individual performance, these data can serve broader purposes—from reducing OR inefficiencies to defending medical decisions in legal contexts, and informing medical device development. However, building the necessary infrastructure for data processing and analytics requires technical expertise far beyond traditional clinical training. Successful implementation hinges on interdisciplinary collaboration between clinicians and data scientists.
Addressing the Complexities of Surgical Video Analysis
While promising, the use of surgical video raises significant legal, ethical, and logistical questions. Ownership of the footage remains unclear, and concerns about patient privacy, staff exposure, and potential misuse can discourage open sharing—especially in complex or unfavorable cases. Nonetheless, video evidence can also serve as proof of adherence to standard care protocols.
Other concerns include potential conflicts of interest, data security, and a disconnect between those who generate the data (surgeons) and those capable of analyzing it (engineers). Despite these challenges, carefully selected use cases and clear goals can help bridge these divides.
Calls to Action: Accelerating AI Integration in Neurosurgery
The increasing public awareness of AI offers a strategic moment to integrate it meaningfully into neurosurgical practice. Four key actions are proposed:
Establish AI Task Forces: Professional societies such as the American Association of Neurological Surgeons (AANS) and the Congress of Neurological Surgeons (CNS) should form joint task forces to define best practices, set data standards, and facilitate clinician-scientist collaboration. Subspecialty task forces should address domain-specific use cases, backed by dedicated research funding and aligned with interdisciplinary partners across related surgical and technical fields.
Create Multi-Institutional Research Organizations: Single-institution efforts lack the scale and diversity needed to train robust AI models. Instead, we should foster independent, multi-institutional research entities—either nonprofit or for-profit—that can secure funding, manage cross-institutional data, and develop reusable tools for ML integration.
Launch Conferences and Challenge Frameworks: There is a need for clinician-led conferences and grand challenges to define and advance AI use cases in surgery. Inspired by the “Common Task Framework” (CTF) model, such initiatives can attract diverse collaborators and reward clinically meaningful innovation. Dedicated tracks within surgical and technical conferences can help bridge the divide between these communities.
Standardize Data Capture and Sharing: Video is just one of many valuable data streams in the modern OR. Integrating and standardizing these streams for AI use remains a challenge due to technical and regulatory hurdles. Collaborative efforts between surgical and technical communities, supported by new regulatory frameworks, can unlock the potential of OR data for clinical improvement.
By aligning clinical insight with technical innovation, the surgical community can unlock the transformative potential of AI. Through multidisciplinary efforts, structured collaboration, and a shared vision, we can bring next-generation surgical analytics from concept to clinical reality—benefiting patients, providers, and the entire healthcare ecosystem.
As an example, a study developed an AI-ready dataset for model training by programmatically querying open surgical procedures on YouTube, selecting and manually annotating a subset of videos. This dataset was used to train a multitask AI model, subsequently applied in two proof-of-concept studies: (1) to generate “surgical signatures” that characterize procedural patterns, and (2) to identify hand motion kinematics indicative of surgeon experience and skill level.
The resulting Annotated Videos of Open Surgery (AVOS) dataset comprises 1,997 videos spanning 23 procedure types, sourced from 50 countries over a 15-year period. To test real-world applicability, additional deidentified surgical videos were prospectively collected from a tertiary academic medical center (Beth Israel Deaconess Medical Center [BIDMC]), with IRB approval and patient consent.
Multitask Model Architecture and Training
A multitask neural network was trained on the AVOS dataset to perform spatiotemporal analysis of hands, tools, and actions in surgical video. The model captured procedural flow and fine motor behaviors in near real time, enabling simultaneous analysis across multiple tasks. To improve generalizability across varied operative conditions, data augmentation techniques—including flipping, scaling, rotation, and occlusion testing—were applied during training. An alternating task training strategy was used to optimize both spatial and temporal branches, with a dedicated training stream for hand-pose estimation.
Inference was performed by extracting batches of four frames at five-second intervals from each video. Background actions were filtered to ensure consistent comparisons across procedures.
Proof-of-Concept: Generating Surgical Signatures
The model was tested on previously unseen videos of appendectomies, pilonidal cystectomies, and thyroidectomies—procedures well-represented in the AVOS dataset. These videos were manually reviewed to confirm the presence of key operative steps, with durations ranging from 2 to 30 minutes. Using temporal averaging of model outputs, distinct surgical signatures were generated for each procedure, reflecting expected progressions in tool use and action (e.g., from cutting to suturing).
These signatures serve as procedural benchmarks, and significant deviations from them may reflect disruptions in surgical flow, variations in technique, or complexity in a given case. This functionality offers the potential for early detection of surgical anomalies or challenges requiring expert intervention.
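To make the temporal-averaging idea concrete, here is a minimal Python sketch, assuming each analyzed video yields a (frames × actions) array of model output probabilities; the data layout and bin count are illustrative and not taken from the study.

```python
import numpy as np

def surgical_signature(videos: list[np.ndarray], n_bins: int = 100) -> np.ndarray:
    """videos: list of (T_i, n_actions) arrays of per-frame action probabilities."""
    resampled = []
    for probs in videos:
        t_old = np.linspace(0.0, 1.0, len(probs))   # original (normalized) timeline
        t_new = np.linspace(0.0, 1.0, n_bins)       # common timeline for all cases
        # Interpolate each action channel onto the normalized time axis.
        resampled.append(np.stack(
            [np.interp(t_new, t_old, probs[:, a]) for a in range(probs.shape[1])],
            axis=1))
    # Average across cases to obtain a per-procedure signature (n_bins, n_actions).
    return np.mean(resampled, axis=0)
```

A new case can then be compared against the signature of its procedure type, with large deviations flagged for review.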
Proof-of-Concept: Quantifying Surgical Skill
To assess skill, the model was retrospectively applied to 101 prospectively collected surgical videos at BIDMC, including live procedures and simulated wound closures. Participants included 14 operators categorized as either trainees (medical students, residents) or experienced surgeons (fellows, attendings). Hand movements were tracked using bounding boxes and nine anatomical key points (thumb, index finger, and palm).
Kinematic metrics—including velocity, rotation, and translation—were extracted and summarized into a single compound skill score using principal component analysis. Logistic regression analysis showed this compound feature significantly predicted surgeon experience, with each unit increase associated with a 3.6-fold increase in odds of being an experienced surgeon (95% CI: 1.67–7.62; p = 0.001).
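A minimal scikit-learn sketch of this pipeline is shown below; the feature matrix and labels are hypothetical placeholders, and the odds ratio it returns simply illustrates the mechanics rather than reproducing the reported 3.6 estimate.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# X: (n_clips, n_kinematic_features), e.g. velocity, rotation, translation stats
# y: 1 = experienced surgeon, 0 = trainee  (hypothetical labels)
def compound_skill_model(X: np.ndarray, y: np.ndarray):
    pca = PCA(n_components=1)
    skill_score = pca.fit_transform(X)        # single compound skill feature
    clf = LogisticRegression().fit(skill_score, y)
    odds_ratio = np.exp(clf.coef_[0][0])      # change in odds per unit of the score
    return pca, clf, odds_ratio
```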
Implications for Surgical Education and AI-Augmented Assessment
The multitask model demonstrated procedure-agnostic capabilities, performing reliably across variable video conditions such as lighting and camera angles. The ability to analyze both procedural flow and individual surgeon behavior marks a major advance toward automated, objective surgical feedback.
By linking hand motion patterns to surgical expertise, the model offers actionable insights for training. For instance, AI-driven feedback on motion economy and steadiness could allow trainees to iteratively improve performance, aligning with best practices observed in expert surgeons. This scalable, unbiased approach to skill assessment may facilitate faster and more reliable surgical training, especially in simulation-based environments.
Product-Service Systems (PSS) are business models that go beyond delivering physical products by also offering complementary intangible services. With the rise of smart technologies and connected devices, these models have evolved into Smart Product-Service Systems (Smart PSS), enabling service providers to deliver personalized and data-driven offerings. By leveraging user-generated data from smart products, providers can tailor services to meet individual customer needs more effectively.
To harness the potential of smart devices in enhancing existing services or developing new ones, a structured Smart PSS design methodology is essential. Such a method enables enterprises and service providers to build personalized Smart PSS solutions using user-generated data, or to enhance current offerings through the application of deep learning techniques.
As a first step, service providers should adopt the customer's perspective to identify the root causes of any inconvenience experienced during service use. To support this, the use of a customer journey map is recommended as a tool to analyze and understand the customer’s mental and emotional experience throughout their interaction with a product or service.
A customer journey map is a visual method that outlines the steps a customer takes, combined with their subjective feelings at each stage. By illustrating the entire service experience through graphics and process flows, it helps organizations see their services through the eyes of the customer. This perspective enables companies to pinpoint weaknesses in the service process and use these insights as a foundation for improvement.
Furthermore, as many customer interactions today are closely tied to data collection, traditional customer journey maps may need to be adapted to incorporate data-driven insights that better align with the evolving needs of modern service providers. A modified customer journey map is needed to assist service providers in analyzing the customer experience, as illustrated in the following figure.
Two key enhancements include: the inclusion of “needed data” at each stage of the journey, and the addition of emotional states alongside their root causes, forming the basis of an emotional journey.
This modified approach is explained step by step below:
Service providers should place themselves in the role of a customer and break down the service process into distinct stages, reflecting how the customer experiences the journey.
Identify the key activities and touchpoints customers engage with at each stage. These are critical interactions that shape the overall experience and must be included in the journey map.
Consider the emotions customers might experience during each activity. Providers should empathize with the user to understand these feelings. Based on this, an emotional journey can be drawn to represent the fluctuations in customer sentiment. In this version, both the emotional state and its underlying causes are mapped, providing deeper insight into customer behavior.
Recognize the data customers rely on to complete key activities. This “needed data” should be clearly identified to help service providers understand what information supports each step of the journey.
Finally, service providers should reflect on potential problems or pain points within the process and explore corresponding solutions to enhance the customer experience.
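One way to operationalize the modified map is to capture each stage as a structured record. The Python sketch below uses illustrative field names and an invented restaurant-search example to show how touchpoints, emotions and their root causes, needed data, and pain points could sit side by side.

```python
from dataclasses import dataclass, field

@dataclass
class JourneyStage:
    name: str
    touchpoints: list[str]
    emotion: str                  # e.g. "uncertain", "delighted"
    emotion_root_cause: str       # why the customer feels this way
    needed_data: list[str]        # data the customer relies on at this stage
    pain_points: list[str] = field(default_factory=list)
    candidate_solutions: list[str] = field(default_factory=list)

# Invented example: one stage of a restaurant-search journey.
journey = [
    JourneyStage(
        name="Search for a restaurant",
        touchpoints=["mobile app", "map view"],
        emotion="uncertain",
        emotion_root_cause="too many undifferentiated options",
        needed_data=["current location", "reviews", "prices"],
        pain_points=["hard to compare options quickly"],
        candidate_solutions=["personalized ranked list of nearby restaurants"],
    ),
]
```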
The modified customer journey map is also applied to uncover user requirements across various industries, recognizing that different sectors may face unique service-related pain points. However, certain factors—such as system usability, user interface (UI) design, and pricing—remain consistently influential in shaping customer satisfaction across industries.
Therefore, when designing new services, providers must consider both industry-specific challenges and universal customer expectations.
In summary, the modified customer journey map allows service providers to gain a comprehensive understanding of the user experience, including the emotions associated with each step of the service process. By capturing these emotional insights, providers can more easily identify potential issues and opportunities for improvement, enabling them to make targeted adjustments that better align with customer needs and expectations.
After identifying user requirements, service providers must determine which pain points they aim to address or how to adapt existing services to better fulfill those needs. Once the specific challenges are defined, the next step is to gather the necessary data to support service redesign or improvement.
Data can be sourced from a variety of channels, including social media posts and replies, instant customer feedback, and open datasets provided by governments or academic institutions. Thanks to the widespread availability of online data, much of this information is easily accessible, well-organized, and often available at low cost.
However, publicly available data may not always provide the depth or specificity required for targeted service enhancements. To overcome this limitation, service providers are encouraged to collect their own data tailored to their objectives. For instance, one might scrape restaurant reviews from platforms like Google Maps to gain insights into customer sentiment and expectations.
The benefit of collecting data independently is that it can be customized to meet specific demands. However, this approach can also be time-consuming and resource-intensive, requiring careful consideration of the trade-offs between data quality and operational cost.
To effectively meet diverse customer needs, service providers must develop appropriate models that align with the data they’ve collected. Before selecting a suitable model, it is essential to clearly define the target problem and identify the type of data to be used. However, traditional machine learning models often struggle with unstructured data—such as text, images, and audio—due to their lack of inherent structure.
Focusing on text data, one can demonstrate the advantages of integrating Smart Product-Service Systems (Smart PSS) with deep learning techniques. Specifically, the Doc2Vec (Document-to-Vector) model, a deep learning method that extends the Word2Vec approach, is used and implemented with the Gensim library.
Doc2Vec is an unsupervised learning algorithm designed to convert entire texts into fixed-length vector representations. It learns these representations by predicting words based on their surrounding context, leveraging the idea that the contextual relationships in a sentence can reveal its semantic structure. During training, sections of sentences are masked (hollowed out), and the model learns to predict these missing words using the surrounding context—without requiring labeled data.
Additionally, Doc2Vec incorporates a paragraph matrix that serves as a unique identifier for each document, capturing global information and maintaining coherence across different parts of the same text. This enables the model to retain not just statistical, but also semantic and contextual meaning, even across documents of varying lengths.
Compared to traditional techniques like TF-IDF, which rely heavily on frequency counts and disregard word order or meaning, Doc2Vec provides a richer representation of text. Unlike approaches that approximate a document vector by averaging individual Word2Vec word vectors, Doc2Vec accounts for the structure and flow of entire paragraphs, making it more effective at capturing the topic or theme of a document.
Once trained, the resulting document vectors can be used for various tasks such as document classification, clustering, and similarity comparison. A commonly used technique for comparing these vectors is Cosine Similarity, which measures the angle between two non-zero vectors, offering a metric for semantic similarity between texts.
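A minimal Gensim (4.x) sketch of this workflow, using a tiny invented corpus, trains a 50-dimensional Doc2Vec model and compares an inferred query vector against the document vectors with cosine similarity; the corpus, query, and hyperparameters are illustrative only.

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess

# Tiny invented corpus standing in for attraction or product reviews.
corpus = ["quiet cafe with scenic river views",
          "busy night market with street food",
          "peaceful lakeside walking trail"]
tagged = [TaggedDocument(simple_preprocess(doc), [i]) for i, doc in enumerate(corpus)]

model = Doc2Vec(vector_size=50, min_count=1, epochs=40)   # 50-dimensional vectors
model.build_vocab(tagged)
model.train(tagged, total_examples=model.corpus_count, epochs=model.epochs)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Infer a vector for a new text and score it against each training document.
query_vec = model.infer_vector(simple_preprocess("quiet places with scenic views"))
scores = [cosine(query_vec, model.dv[i]) for i in range(len(corpus))]
```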
The final step focuses on encouraging service providers to develop a complete and practical solution tailored to customer needs. These solutions can take various forms—web platforms, mobile applications (such as Android apps), and other digital interfaces are commonly used to deliver services effectively.
For instance, the figure below illustrates the workflow of a smart Product-Service System (Smart PSS) designed for tourist recommendations, implemented by a network of independent taxi operators.
The system employs an Android application as the front-end interface for the proposed Smart PSS, while the back-end is powered by a Python server integrating a Doc2Vec model. The interaction process unfolds as follows:
In the first step (arrow 1), users input three types of information via the app: their current location, preferred travel distance, and a text description of personal preferences related to attractions and restaurants.
This information is then transmitted to a PHP and Python server (arrows 2, 3, and 4), where it is used as parameters to compute similarities with surrounding attraction data (arrow 5).
The top 5 most relevant results, based on similarity calculations, are sent back to the app (via arrows 6 and 7), and finally, the results are displayed to the user (arrow 8).
To enhance user experience, the recommendation system is also integrated with the Google Directions API to offer navigation services. It generates a personalized full-day itinerary, including suggestions for morning and afternoon attractions, as well as lunch and dinner spots.
The detailed workflow is as follows:
Step 1: Users define a preferred travel radius and select types of attractions they are interested in.
Step 2: They enter a brief text describing their preferences (e.g., “quiet places with scenic views”).
Step 3: If users choose to skip planning for any part of the day (morning, lunch, afternoon, or dinner), they can do so.
Step 4: The application uses the user’s current location to set the search radius.
Step 5: For each attraction within that radius, the system retrieves the top five user reviews from Google Maps, then converts both the user’s input and the review texts into 50-dimensional vectors using the Doc2Vec model.
Step 6: Using cosine similarity, the system compares the user's preferences to each attraction's reviews and ranks the nearby options accordingly.
Each recommended attraction includes details such as the name, user rating, estimated travel time, and distance.
From the ranked list, the system presents the top 10 recommended attractions, allowing users to make selections for each time period. Once selections are made, a custom day plan is generated and route-planned using the Google Maps API, which defaults to the shortest driving route. To support diverse travel preferences, the app also includes a transportation mode selector, enabling users to switch between walking, public transport, biking, or driving.
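The ranking step can be sketched roughly as follows; the data fields (such as distance_km and review_vecs) and the radius filter are hypothetical placeholders for how the app's back end might combine location and Doc2Vec similarity.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(user_vec, attractions, radius_km, top_n=10):
    """attractions: list of dicts with 'name', 'distance_km' (from the user),
    and 'review_vecs' (Doc2Vec vectors of the attraction's top reviews)."""
    scored = []
    for a in attractions:
        if a["distance_km"] > radius_km:      # respect the chosen travel radius
            continue
        # Average similarity between the user's preferences and each review vector.
        sim = float(np.mean([cosine(user_vec, v) for v in a["review_vecs"]]))
        scored.append((sim, a["name"]))
    return sorted(scored, reverse=True)[:top_n]
```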
A traditional warehouse typically includes several core functions such as receiving, storing, tracking and tracing, picking, and shipping. It also involves interactions with both upstream and downstream stakeholders, as well as a centralized management system. While a planning function is recommended, it is often considered optional in traditional warehousing setups. When goods arrive at a warehouse, they need to be tagged. Tagging can occur at multiple levels—truck, pallet, tray, or individual product—but pallet and tray tagging are the most commonly used due to their efficiency and ability to distinguish between different product types. Once tagged, goods are stored either manually or through automated systems like conveyor belts. If the incoming products are hazardous, compatibility checks are performed before storage to ensure safe handling.
In traditional warehouses, tracking is usually done manually using handheld scanners. The planning department plays a key role in determining the order fulfillment schedule and allocating the necessary resources. After planning, the order-picking process begins, which is also typically manual. Orders can be picked by entire trays or pallets, but often require manual assembly of different items to fulfill custom orders. Once picked, orders are packed and shipped to the customers. The warehouse supply chain involves both upstream and downstream stakeholders. Upstream stakeholders, typically suppliers, are responsible for delivering goods to the warehouse. Downstream stakeholders, usually customers, place orders that the warehouse fulfills. In some cases, the same entity may serve as both an upstream and downstream stakeholder—for example, manufacturers storing surplus products in warehouses. Clear communication with stakeholders is crucial, particularly when notifying them about order readiness, inventory levels, or any disruptions in the delivery process.
Effective warehouse operations also require a robust management system that covers finance and accounting, data processing, and sales management. Sales management focuses on handling orders and inventory, ensuring supply meets demand. Finance and accounting oversee the warehouse's financial health and ensure operational continuity. Data processing manages information from scanners and other sources, working closely with sales management to maintain accurate stock levels and support efficient supply intake.
While smart warehouses share many of the same high-level components as traditional warehouses, they make planning a mandatory component and incorporate a warehouse communication network for enhanced coordination and efficiency. Each feature in the top layer has various optional and mandatory elements.
A smart warehouse functions through the seamless communication and integration of multiple systems. Within its business process management (BPM) structure, various technologies support the overarching warehouse management system (WMS). Advanced Planning and Scheduling (APS) software is used to oversee the planning and scheduling operations, ensuring that resources are optimally allocated. Simultaneously, an inventory management system is employed to maintain optimal stock levels and streamline inventory control. Financial and sales management modules handle all monetary interactions—ranging from processing incoming orders to managing restocking events—ensuring smooth financial operations throughout the warehouse. Order picking is directed by an Automated Storage and Retrieval System (AS/RS), which interfaces with Automated Guided Vehicles (AGVs) and other material-handling equipment to carry out tasks efficiently. Furthermore, a Transport Management System (TMS) is integrated into the process, coordinating with AGVs to prepare and load shipments onto trucks.
The figure outlines three key roles in the system—represented as swimlanes: the Supplier/Client, the Warehouse Management System (WMS), and the Warehouse. The client initiates a request, which is processed by the supplier. The WMS manages the planning and coordination of that request, while the Warehouse role executes the operational tasks. Each role consists of specific actions that may trigger related tasks across the system.
The architecture design of smart warehouses needs to consider various viewpoints. In the following context diagram for a smart warehouse, the Warehouse Management System (WMS) serves as the central hub, orchestrating the core operations of receiving, storing, picking, and shipping goods. The WMS interfaces with a range of human operators, including corporate supervisors, warehouse managers, and floor-level employees. These users interact with the system to log actions, monitor activities, make operational decisions, and extract performance reports from the collected data. Depending on warehouse policy, truck drivers may also interact with the WMS as external operators.
To function effectively, the WMS depends on real-time data inputs gathered through scanners and sensors. These devices are often mounted on Automated Guided Vehicles (AGVs) or embedded within Augmented Reality (AR) systems. The Transport Management System (TMS) uses this data to guide AGVs and employees—via AR devices—to specific locations within the warehouse. This operational flow is further supported by the Advanced Planning and Scheduling (APS) system, which determines which goods should be retrieved. In some configurations, an Order Picking Operation System (OPOS) is also integrated to enhance the efficiency of picking tasks, and shelves may be outfitted with RFID tags to enable adaptive, self-adjusting storage.
To further enhance warehouse performance, Multi-Agent System (MAS) methodologies can be implemented. In such setups, robotic agents operate collaboratively within a distributed architecture. Intelligent agent-based communication allows for decentralized task allocation, optimizing efficiency through distributed algorithms. This approach helps to reduce battery usage and latency while maximizing utilization through task decomposition techniques.
The finance/accounting and sales management components manage all monetary and transactional functions within the warehouse. These include procurement, order processing, employee payroll, and broader financial oversight. In an optimized smart warehouse, seamless information sharing occurs between internal systems and external partners, enabled through automated data exchanges and system integration. This real-time visibility enhances responsiveness, reduces uncertainty, and helps maintain operational stability.
Reliable communication is critical for the success of smart warehouse operations. To ensure low-latency and high-reliability connections, the WMS communicates with other systems via robust 4G LTE and 5G networks. These technologies provide the infrastructure needed for agile, responsive operations within the Industry 4.0 framework. Additionally, cloud-based smart warehouses often leverage hardware virtualization to enable resource pooling, improving scalability and resource efficiency.
The decomposition view shown below outlines all essential modules required for smart warehousing. It includes not only the top-level functional components but also the sub-modules associated with each enabling technology. Key technologies featured in this view include barcoding, Augmented Reality (AR), Automated Guided Vehicles (AGVs), the Internet of Things (IoT), Warehouse Management Systems (WMS), scanning, RFID, and communication infrastructure.
For example, within the IoT module, several sub-components are identified: Artificial Intelligence (AI), Ambient Intelligence (AmI), a Security Module, and Real-Time Information processing. These elements highlight the critical considerations a smart warehouse designer must account for when implementing IoT-based solutions.
The AI sub-module encompasses techniques such as machine learning and deep learning algorithms, which enable predictive analytics, anomaly detection, and adaptive process control. Closely related, Ambient Intelligence (AmI) builds on AI, sensor networks, and pervasive computing to create an environment that dynamically adapts to the needs and behaviors of users and stakeholders. In this context, AmI contributes to making the warehouse responsive, intelligent, and context-aware.
Real-time information gathered from distributed sensors and devices supports the execution of both AI and AmI functions, enabling timely decision-making and responsive automation. A critical aspect of any Industrial IoT system is security—hence, the inclusion of a dedicated Security Module. This module ensures that proper security controls are in place to protect IoT devices, data transmissions, and system integrity from potential threats.
Together, these sub-packages form the technological backbone of a smart warehouse, ensuring that operations are not only efficient and intelligent but also secure and adaptable to evolving operational demands.
The uses view is presented below. The two core modules in the smart warehouse architecture are the Warehouse Management System (WMS) and the warehouse communication network. The WMS serves as the central control unit, interacting with various back-end systems and external modules to coordinate operations. The communication network functions as a critical link between the WMS and front-end technologies such as Augmented Reality (AR) hardware and Automated Guided Vehicles (AGVs). Additionally, other systems—including the Transport Management System (TMS), Internet of Things (IoT) devices, RFID infrastructure, and various external platforms—connect directly to the communication network to enable seamless data exchange and real-time coordination across the warehouse ecosystem.
Finally, in the deployment view, the mapping of software modules to their corresponding hardware components is illustrated. The data processing module is deployed on both the Warehouse Management Server and Warehouse Manager nodes, which serve as the central hubs for managing operations and analytics. Additional nodes are dedicated to cameras, sensors, scanners, and augmented reality (AR) hardware.
Automated Guided Vehicles (AGVs) are equipped with their own onboard cameras, sensors, and scanners, enabling autonomous navigation and interaction with the warehouse environment. Similarly, smart shelves are outfitted with embedded sensors to monitor inventory and environmental conditions. Each of these hardware components is represented as a separate node within the deployment architecture, emphasizing the distributed and interconnected nature of the smart warehouse system.
Artificial intelligence (AI), machine learning (ML), and data-driven techniques can be used to support and optimize the manufacturing of composite materials. The full model development lifecycle encompasses analysis, optimization, inverse problem-solving, and experimental validation. The Industry 4.0-inspired modeling pipeline tailored to composite processing combines both data-centric and model-centric knowledge engineering practices, with an emphasis on embedding domain expertise directly into the dataset curation and model development phases.
A high-level Industry 4.0 (I4.0) framework is composed of three interrelated sub-systems:
(a) Data acquisition from cyber-physical systems, which may include a combination of real-time production data and simulated outputs;
(b) A data pipeline that manages key preprocessing tasks such as data cleaning, dimensionality reduction, storage, and other logistical operations necessary to ensure data usability;
(c) A model development pipeline, which involves filtering incoming data, constructing relevant knowledge datasets, building and validating predictive models of the manufacturing system, and ultimately deploying these models to support real-time decision-making on the shop floor.
Focusing on sub-system (c), the proposed framework builds upon the CRISP-DM (Cross-Industry Standard Process for Data Mining) methodology, a widely adopted and open-standard model for data analytics. It provides a structured, iterative approach to data-driven problem-solving, making it especially suited for dynamic, evolving manufacturing environments.
A key element in data-driven modeling is the anomaly detection system, which ensures that incoming data aligns with the distribution of previously collected data. This validation step can be implemented using a variety of techniques, such as classification, clustering, deep learning, or statistical models. If the chosen method determines that a new data point fits within the existing distribution, it is accepted and incorporated into the dataset; otherwise, it is rejected. To enhance this process, domain expertise can be integrated through a dataset knowledgeability exercise, allowing expert judgment to refine the dataset and establish a feedback loop. This hybrid approach contrasts with fully automated, knowledge-agnostic systems and is seen as a vital component in advancing toward Industry 5.0. By enforcing consistency in data distribution, the assumption of independently and identically distributed (IID) data is upheld—an essential condition for making reliable comparisons between models trained on distinct datasets.
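As a rough illustration, the gate could be as simple as a per-feature z-score check, as in the Python sketch below; the threshold and choice of statistic are assumptions standing in for the classification, clustering, deep learning, or statistical detectors mentioned above.

```python
import numpy as np

def accept_observation(dataset: np.ndarray, new_obs: np.ndarray,
                       z_threshold: float = 3.0) -> bool:
    """Accept a new observation only if it fits the distribution of the
    previously accepted dataset (rows = observations, columns = features)."""
    mean = dataset.mean(axis=0)
    std = dataset.std(axis=0) + 1e-9                 # avoid division by zero
    z_scores = np.abs((new_obs - mean) / std)
    return bool(np.all(z_scores < z_threshold))      # within-distribution check

# Accepted points extend the knowledge dataset; rejected ones would be routed
# to expert review via the "dataset knowledgeability" feedback loop.
```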
USE CASE:
Composite materials have seen broad adoption across industries such as aerospace, automotive, and construction, thanks to their exceptional structural properties—namely, high stiffness-to-weight and strength-to-weight ratios—along with reduced maintenance needs and lower lifecycle costs. Despite these benefits, the production of composite components remains challenged by significant levels of uncertainty, both aleatoric (inherent variability) and epistemic (lack of knowledge), which hinder the consistent manufacture of high-quality parts.
In the aerospace sector, where structural reliability is paramount due to the potentially catastrophic consequences of failure, stringent qualification frameworks exist to ensure the integrity of materials, processes, and designs. While necessary for safety, these frameworks place considerable demands on manufacturers and often limit flexibility in production—particularly when it comes to cost optimization and real-time decision-making on the factory floor (Crawford et al., 2021a).
One example of these challenges can be seen in the use of “bus stop” autoclave cure cycles, where multiple parts, tools, and materials are batched together and cured simultaneously. This method offers practical advantages—such as reduced floor space requirements, more efficient use of capital equipment, and lower overhead—but also increases the complexity of planning and process control. In such settings, shop floor engineers and operators are often required to make in-situ decisions, relying on their expertise and tacit knowledge to maintain process conformance and part quality.
However, these decisions are typically unstructured and lack systematic optimization, revealing an opportunity for the integration of intelligent, technology-enabled decision-support systems. The success of such systems depends not only on access to historical process data, but also on the incorporation of expert insights—bridging human experience with data-driven methods to improve repeatability and efficiency in composite manufacturing.
In bus-stop autoclave curing runs for manufacturing composite aerospace structures, multiple small components—each with similar physical characteristics and qualified to undergo the same cure cycle—are stacked together in a single autoclave to improve production efficiency. Despite the shared cure cycle, each part must maintain a thermal history that stays within an acceptable thermal envelope to ensure product quality.
To achieve a successful cure across all components, parts are carefully selected based on physical similarities, such as laminate thickness, construction type (e.g., monolithic vs. sandwich panels), tooling material, and other relevant attributes. Two critical features derived from each part’s thermal profile—the peak exotherm (the maximum temperature reached during the cure) and the steady-state lag (the highest temperature differential between the part and the surrounding autoclave gas)—are used as indicators of process quality.
Lower values of these thermal metrics generally signify reduced process variability, which correlates with higher-quality outcomes. If these thresholds are exceeded, parts may develop defects like voids, ultimately compromising mechanical properties such as flexural strength, flexural modulus, and interlaminar shear strength. Thus, peak exotherm and steady-state lag serve as essential acceptance criteria for screening cured parts.
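A rough Python sketch of extracting these two metrics from a part's thermal profile follows. It assumes part and autoclave-gas temperatures sampled at the same time points and reads the exotherm as the part's overshoot above the gas temperature; this is an illustrative interpretation, not necessarily the study's exact definition.

```python
import numpy as np

def thermal_features(part_temp: np.ndarray, gas_temp: np.ndarray):
    """part_temp, gas_temp: 1-D arrays of part and autoclave-gas temperatures
    sampled at the same time points (hypothetical data format)."""
    # Peak exotherm: how far the part overshoots the surrounding gas temperature.
    peak_exotherm = float(np.max(part_temp - gas_temp))
    # Steady-state lag: the largest amount by which the part trails the gas.
    steady_state_lag = float(np.max(gas_temp - part_temp))
    return peak_exotherm, steady_state_lag
```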
In the context of bus-stop autoclave cure cycles, interpretable surrogate models such as Logistic Rule Regression are integrated with expert knowledge through a fuzzy scoring system, enabling an assessment of dataset “knowledgeability” prior to the deployment of black-box models. To further support model validation, two metrics are introduced: specificity as a global confidence indicator, and the novel Decision Boundary Crispness Score (DBSC) as a local, sensitivity-based metric.
The modeling task at the factory level is framed as a binary classification problem, where the objective is to predict whether a carbon fiber prepreg part passes or fails the quality standards following autoclave curing. Specifically, the classification is based on two critical thermal processing criteria:
Pass if the peak exotherm temperature is less than 5 °C (Criterion 1)
Pass if the maximum lag temperature is less than 20 °C (Criterion 2)
Parts that meet both criteria are labeled as "pass" (class 1), while those that exceed either threshold are labeled as "fail" (class 0).
The model architecture, illustrated in the following figure, consists of two independent predictive models—one for each thermal outcome. This design allows for separate evaluation and parameter tuning tailored to each specific target, enabling more precise learning for each thermal metric.
Both models share an identical architecture, comprising five hidden layers with seven neurons per layer. This configuration was chosen through a parametric study, where it demonstrated the lowest error rate on the test dataset. Each neuron uses a sigmoid activation function, and model training is performed using the Adam optimizer with a binary cross-entropy loss function.
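A hedged Keras sketch of one such classifier, together with the labeling rule implied by the two criteria, is given below; feature names and data handling are placeholders, and only the stated architecture and training choices are taken from the text.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_classifier(n_features: int) -> keras.Model:
    """Five hidden layers of seven sigmoid neurons, binary output."""
    model = keras.Sequential()
    model.add(keras.Input(shape=(n_features,)))
    for _ in range(5):                                   # five hidden layers
        model.add(layers.Dense(7, activation="sigmoid")) # seven neurons each
    model.add(layers.Dense(1, activation="sigmoid"))     # pass/fail output
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

def label_parts(peak_exotherm: np.ndarray, max_lag: np.ndarray) -> np.ndarray:
    """Pass (1) only if both thermal criteria are met, as stated above."""
    passes_criterion_1 = peak_exotherm < 5.0    # Criterion 1
    passes_criterion_2 = max_lag < 20.0         # Criterion 2
    return (passes_criterion_1 & passes_criterion_2).astype(int)

# Two independent models, one per thermal outcome; X would hold hypothetical
# part/load descriptors (e.g. laminate thickness, construction type, tooling).
# exotherm_model = build_classifier(X.shape[1]); exotherm_model.fit(X, y1, ...)
```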
The results demonstrate that DBSC offers a more nuanced and conservative evaluation of both dataset and model quality, especially useful when dealing with uncertain or variable manufacturing conditions. The enhanced explainability and localized insight provided by these methods are particularly valuable to production engineers, supporting trust and accountability when using black-box models in high-stakes, real-time decision-making scenarios.
The fragmented and technically demanding nature of augmented reality (AR) content development continues to limit its broader adoption in industrial settings. Current AR authoring methods often require specialized knowledge in areas such as 3D modeling, programming, computer vision, tracking, and rendering. While automated tools attempt to streamline content creation, they are typically rigid and unsuitable for dynamic or undefined processes.
Many AR development approaches depend on existing resources like CAD models or PDF manuals. However, in cases where these materials are outdated, incomplete, or unavailable—such as with legacy equipment or obsolete electronics—content creation becomes a significant bottleneck.
Although model and symbol libraries are emerging to address this, they require constant updates and rely on fiducial markers for registration. This introduces further challenges, including the need for infrastructure changes (e.g., marker placement and visibility), time-consuming setup, and ongoing maintenance. Natural feature recognition can eliminate the need for markers but instead demands complete, high-quality 3D models and clear visibility of all object features, which is often impractical in industrial environments. Furthermore, while depth sensors and image capture can help generate 3D models without CAD data, these methods may interrupt workflows, require invasive equipment, and struggle to capture small or occluded components.
To address these challenges, AR content creation methods must evolve to accommodate non-experts, operate without dependencies on pre-existing resources or infrastructure modifications, and adapt to changes in processes or tasks. The proposed method introduces a low-disruption, template-based approach that bridges traditional and AR interfaces by capturing expert knowledge—both explicit and tacit—through eye-tracking and structured information mapping.
A Multi-Modal Framework for AR Content Creation and Delivery
The framework supports two user roles:
Content creators (e.g., trainers or experts), and
Content consumers (e.g., trainees or task novices).
It comprises five main steps:
Business Need Identification – Pinpoint a requirement for efficient knowledge transfer between experienced and inexperienced staff, particularly for hands-busy tasks.
Task Recording via Eye Tracking – The expert performs the task while wearing eye-tracking glasses, generating visual and audio data that are then broken down into clear training steps. These are populated into a Unity 3D-based AR content template.
Iterative Refinement – Content is refined through usability testing and feedback loops.
User Feedback Collection – Feedback during real-world use is captured to guide further optimization.
Knowledge Preservation and Empowerment – Organizations can retain critical knowledge through archiving and moderation, while users gain autonomy through self-paced learning.
(a) Task Recording & Information Mapping: The initial stage involves capturing the training task using eye-tracking technology. The recorded content is then structured through information mapping to define clear instructional steps. (b) Unity Development Environment: Unity is used as the core development platform, providing a visual interface for organizing and integrating multimedia elements into the AR training experience. (c) Augmented Repair Training Application Template with Mixed Reality Toolkit: The template, built on Unity and enhanced with the Mixed Reality Toolkit, offers a pre-configured structure for rapidly building AR training applications with intuitive interaction components. (d) Content Import: Trainers upload task-related media—such as video clips and images—into the template to visually support each instructional step. (e) Step Customization: Trainers define and organize the number of training steps, and for each one, they add supporting images, descriptive text, and other guidance to enhance user comprehension.
The training content (video, audio, gaze data) is captured directly during task execution. Eye-tracking glasses combine a front-facing video feed with a gaze fixation marker and integrated audio. This method enables the transfer of explicit procedural steps and tacit expertise, such as visual attention and workflow patterns. The resulting media is edited using standard software (e.g., Apple iMovie) and mapped into structured instructional steps.
Design of the AR Interface: Augmented Repair Training Application
The template organizes content using user-centric design principles. A numbered step menu allows users to track progress and navigate between instructions. Controls such as “home,” “back,” and “next” replicate traditional interface metaphors, easing the transition for new AR users. To enhance clarity, content is displayed with high-contrast text (white font with blue accents).
Instructional content is structured using information mapping, a method that includes:
Chunking related steps and concepts
Highlighting essential information
Ensuring consistency in formatting and structure
Integrating visuals effectively
Presenting content hierarchically for intuitive learning
Verification of Learning and Task Completion
Verification of knowledge transfer can be achieved through interactive assessments or expert review. While quizzes may suit well-defined tasks with quantifiable outcomes, they lack flexibility for variable, real-world processes. For this application, expert review was chosen for its simplicity and alignment with the goal of maintaining learner autonomy while ensuring training effectiveness.
USE CASE:
The AR training assistance method was implemented to create shop floor training content for a small enterprise. While participants expressed enthusiasm about the innovative approach, many were unfamiliar with the tools involved—specifically Unity3D, eye-tracking software, and the AR device itself. This unfamiliarity led to several usability challenges during the content editing and deployment stages. Technicians required support to navigate unexpected issues such as accidentally deleted scripts or misplaced files.
Despite these challenges, subjective feedback from stakeholders was highly positive. Senior management and the sales team viewed the AR training tool as a valuable and forward-thinking investment. When showcased at trade events and to existing clients, the tool generated strong interest and favorable responses. Shop floor staff were also enthusiastic about the interface, interpreting it as a sign that their company was embracing modern technologies and investing in employee development. However, a small number of staff expressed discomfort using certain features of the AR device, such as voice commands and air tap gestures.
In response to the challenges identified during deployment, it became evident that a more accessible content creation interface would improve usability and reduce technical barriers. To this end, the development of an "intermediary interface" is proposed for future work. This interface would offer a form-based editing environment, simplifying the interaction between non-expert users and the underlying AR content formatting system.
Key features of the proposed intermediary interface include:
Form-style input fields for guided data entry (e.g., text, video, audio annotations)
Built-in tools to edit and align eye-tracking outputs into AR-friendly formats
Tooltips and help documentation embedded within each section to guide users step-by-step
Automated conversion of form inputs into AR code via a secure back-end service
Responsive layout using a bootstrap-style grid system to accommodate various AR form factors
The Integrated Development Environment (IDE) would be decoupled from the front end and managed in the back end, ensuring secure storage, script execution, and formatting integrity. Additional layers of security and content moderation can be introduced to prevent accidental corruption, unauthorized access, or unapproved deployments.
This redesigned workflow aims to empower non-technical users to contribute to AR content creation while preserving the integrity, flexibility, and scalability of the training system.
In a recently published study, we examine the impact of communication network characteristics on project performance, measured as the number of issues closed within open-source software development projects. We also examine how this outcome is affected by project managers' active participation in these communication networks. The results, obtained from analyzing 120,243 observations in an unbalanced dynamic panel, show that as team interaction increases, the team's technical problem-solving capability initially rises and then declines. We find that a project manager's participation in team discussions increases the team's problem-solving capability. That is, manager participation has a moderating effect that flattens the curvilinear relationship between team interaction and technical problem-solving performance. In essence, project managers' active participation alleviates the reduction in the weekly issue closure rate once the density of team interaction goes beyond the inflection point.
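A stylized panel specification consistent with this description (the notation is assumed here for illustration, not the authors' exact model) is:

```latex
\text{IssueClosure}_{it} = \beta_1\,\text{Interaction}_{it}
  + \beta_2\,\text{Interaction}_{it}^{2}
  + \beta_3\,\text{Mgr}_{it}
  + \beta_4\,(\text{Interaction}_{it}\times\text{Mgr}_{it})
  + \beta_5\,(\text{Interaction}_{it}^{2}\times\text{Mgr}_{it})
  + \gamma' X_{it} + \alpha_i + \varepsilon_{it}
```

In such a specification, an inverted-U pattern corresponds to a negative coefficient on the squared interaction term, while a flattening of the curve under active manager participation corresponds to a positive coefficient on the squared-term interaction that partially offsets it.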
The findings have implications for project managers and developers working on OSS projects. Our research provides recommendations on the attributes of the project manager that would be best suited for improving project team performance and sheds light on aspects of team communication that a project manager may need to manage. The project manager should promote team interaction to enable the flow of ideas and information to improve the issue resolution rate of the team. However, the project manager must remain aware of the growing fatigue from information overload in teams. Our research provides evidence for the inflection point beyond which such fatigue might start to impede project performance. We identify the specific time when the project manager should intervene to reduce the negative effects of information overload. Such an understanding enables project managers to make the best use of the project management hours and add value.
As issue closure is often required for bug fixes in new product development, our study shows that overall product development can improve substantially at high levels of team interaction by employing a manager who actively engages in team discussions, enabling a faster product-development-to-market cycle. Because the intensity of team member interaction can seldom be controlled, our study provides guidance on how expensive project management resources can be allocated, depending on the extent of team interaction, to shorten the new product development cycle.
The findings of this research also have implications for how OSS platforms can be made more effective in managing software development activities. Since team performance decreases with additional team interactions after an inflection point, OSS platforms could consider developing a system that categorizes the communication notifications a team member receives (such as a thread or message broadcast) by priority level. For example, suppose a software project on GitHub has several modules based on which developers are grouped. Developers will prefer to stay updated on messages from colleagues working on the same module. The inflection point identified in this study could serve as a reference point for the platform when project performance is impacted by information overload. The system developed by the OSS platform could be triggered at this reference point so that communication notifications to team members can be prioritized based on the module a developer is working on. In turn, developers can focus on a few important messages to enhance team performance.
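A toy Python sketch of such a prioritization rule, with invented field names and a single illustrative threshold, might look like this:

```python
def prioritize(notifications, developer_module, interaction_density,
               inflection_point):
    """Once interaction density passes the inflection point, deliver only
    same-module messages immediately and batch the rest into a digest.
    Field names and the threshold logic are illustrative assumptions."""
    if interaction_density <= inflection_point:
        return notifications, []                      # below threshold: deliver all
    urgent = [n for n in notifications if n["module"] == developer_module]
    digest = [n for n in notifications if n["module"] != developer_module]
    return urgent, digest
```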
The OSS platform may also aid project managers to participate more in team activities as our results suggest that active participation of project managers enhances team performance by flattening the curve. For instance, the platform may provide project managers with a dashboard that summarizes project progress. Such a service may enable the manager to get a quick and accurate update of the team and may help the manager to participate in a more meaningful way. The platform may also provide a metric that measures the relevance of communication by developers, enabling managers to efficiently streamline communication within their teams.