Friday, November 28, 2025

According to a recent letter from Stephen Feinberg (U.S. Deputy Defense Secretary) to U.S. lawmakers — dated October 7, 2025 — the Pentagon concluded that Alibaba, Baidu and BYD “should be added” to the so-called Section 1260H list.

What the Pentagon is proposing

  • In the October 7, 2025 letter, Feinberg wrote that the Pentagon had concluded Alibaba, Baidu, and BYD “should be added” to the so-called Section 1260H list.

This list targets Chinese companies “deemed to aid China’s military” while operating in or having ties with U.S. entities. 

  • Alongside those three firms, five other Chinese companies were also flagged for possible inclusion: Eoptolink Technology Inc., Hua Hong Semiconductor Ltd., RoboSense Technology Co., WuXi AppTec Co., and Zhongji Innolight Co.

  • The letter reportedly arrived just weeks before a broader “trade truce” agreement between Donald Trump and Xi Jinping — suggesting this is a carefully timed warning. 


What the list means — and what it doesn’t

  • Inclusion on the 1260H list does not automatically impose sanctions or “bans.”

  • But it serves as a warning signal — a reputational risk — especially for U.S. companies, investors, or partners dealing with these firms. 

  • For the companies: it could limit or complicate their ability to form partnerships with U.S. firms or get investment/credits from U.S. entities — even if no immediate legal/contractual restrictions apply. 

  • For markets and investors: such designations tend to shake confidence, often leading to stock drops (as already seen with recent share price reactions).


 Responses from the companies and reactions so far

  • Alibaba denied the allegations. It said there “is no basis” to put it on the 1260H list, emphasising that it is not a “Chinese military company” and doesn’t engage in U.S. military-related procurement. 

  • Baidu also rejected the suggestion, calling the claim “entirely baseless,” saying its products and services are for civilian use, and asserting that no evidence has been presented to justify the inclusion.

  • For BYD, as well as the five other companies, there’s (as yet) no detailed public response, according to the latest reporting. 

  • In China, government and media sources have sharply criticised the proposal, calling it an example of what they see as “politicization” of trade and technology — warning it may undermine global supply-chains and fair competition. 


๐ŸŒ Broader Context — Why the U.S. is doing this now

  • This move is part of a broader U.S. strategy to scrutinise Chinese tech, auto, EV, and semiconductor firms — sectors that now overlap heavily with both civilian economy and potential military-industrial applications. 

  • The timing — just before a trade-truce deal — suggests Washington may be using such designations as leverage, or at least to maintain pressure even as tariff tensions ease. 

  • The move reflects growing concern over “military-civil fusion” — the idea that civilian firms and technologies in China may be used, directly or indirectly, to support defence objectives. 


What to watch next

  • It remains unclear whether Alibaba, Baidu or BYD have been formally added to the 1260H list yet.

  • If they are formally listed, watch for possible ripple effects: from reduced foreign investment to pressure on U.S. partners & supply chains working with these firms.

  • How other governments, including in Europe and Asia, respond could influence whether this remains a U.S.-centric signal or becomes a broader push against Chinese firms globally.

  • Whether the companies choose legal or diplomatic strategies to contest or rebut the listing. 

  • Compiled by

  • Aqsa Mahak (Financial Analyst)

Saturday, November 22, 2025

Meta’s WorldGen — Generative AI for Interactive 3D Worlds

Introduction

Meta has just unveiled WorldGen, a cutting-edge generative AI system that can turn a single text prompt into a fully interactive, navigable 3D world. This isn’t just about creating pretty 3D scenes — WorldGen builds real structure, walkable areas, and engine-ready assets.


Why WorldGen Is a Big Deal

  1. From Static to Interactive

    • Unlike many 3D generative models that prioritize visual fidelity (e.g., Gaussian splatting), WorldGen emphasizes functionality. It creates a navigation mesh (navmesh) to define walkable surfaces.

    • This means the generated world is not just for show — characters or agents could realistically walk through it.

  2. Seamless Integration with Game Engines

    • The 3D meshes generated by WorldGen are exportable to Unity and Unreal Engine.

    • This makes it practical for game developers, simulation creators, and enterprise users to plug this into existing workflows. 

  3. Editable Modular Worlds

    • WorldGen breaks scenes into parts. So, once a world is generated, designers can tweak, remove, or move individual objects. 

    • This modularity prevents the “one big blob” problem and gives creators real control.

  4. Fast Generation

    • The system can generate a traversable world in about five minutes from a single text prompt. 

    • This speed dramatically reduces the time and effort needed compared to manual 3D world building.


How WorldGen Works — The Pipeline

Meta describes WorldGen’s architecture as four stages (a simplified Python sketch of how they fit together follows the list below):

  1. Scene Planning

    • A large language model (LLM) interprets the text prompt and plans a spatial layout.

    • It decides where objects might go, how terrain should be structured, etc.

  2. Scene Reconstruction

    • Rough geometry is generated, conditioned on the navigation mesh.

    • This ensures the world is not only visually coherent but physically navigable.

  3. Scene Decomposition

    • Objects are broken down into parts (buildings, trees, rocks, etc.).

    • This decomposition enables editing and reusability.

  4. Scene Enhancement

    • Final pass to refine textures, improve geometry, and polish visuals.

    • The output is more detailed and “cleaner” than the initial blockout.
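
To make the hand-offs between these stages concrete, here is a minimal, purely hypothetical Python sketch. None of the function names correspond to a released Meta API; each stage is reduced to a stub so the overall flow can actually run.

    # Hypothetical sketch of a WorldGen-style pipeline (not Meta's actual API).
    # Each stage is a trivial stub; the point is how data flows between stages.
    def plan_scene(prompt: str) -> list[dict]:
        # Stage 1: an LLM would turn the prompt into a spatial layout.
        return [{"name": "cabin", "position": (0, 0)}, {"name": "tree", "position": (5, 2)}]

    def build_navmesh(layout: list[dict]) -> set[tuple[int, int]]:
        # Walkable grid cells derived from the layout (trivially, a 10x10 grid).
        return {(x, y) for x in range(10) for y in range(10)}

    def reconstruct_geometry(layout, navmesh):
        # Stage 2: rough geometry, conditioned on the navmesh so paths stay walkable.
        return [{"object": obj, "mesh": "blockout"} for obj in layout]

    def decompose_scene(geometry):
        # Stage 3: split the scene into individually editable parts.
        return [{"part": g["object"]["name"], "mesh": g["mesh"]} for g in geometry]

    def enhance_scene(parts):
        # Stage 4: final pass that refines textures and polishes geometry.
        return [dict(p, mesh="refined") for p in parts]

    def generate_world(prompt: str) -> dict:
        layout = plan_scene(prompt)
        navmesh = build_navmesh(layout)
        parts = decompose_scene(reconstruct_geometry(layout, navmesh))
        return {"navmesh": navmesh, "objects": enhance_scene(parts)}

    print(generate_world("a small cabin in a pine forest")["objects"])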


Applications & Implications

  • Gaming & Metaverse
    WorldGen could radically speed up level design, prototyping, and content creation for game developers.

  • Enterprise Simulations
    Use-cases include digital twins, training simulations (e.g., factory floor, safety drills), and architectural visualizations.

  • AI Agents / Embodied AI
    Since the worlds are traversable, they can serve as realistic training environments for AI agents (robots, virtual characters).

  • Creative Tool for Designers
    Designers and creators (even non-3D experts) could easily whip up immersive worlds just by writing prompts.


Limitations & Challenges

  • Research Phase: WorldGen is currently research-grade, not a fully released production product. 

  • Compute Cost: Generating interactive 3D worlds will likely be resource-intensive (GPU / cloud costs).

  • Quality vs Scale: A 5-minute generation is impressive, but there may be tradeoffs in how big or detailed the world can get.

  • Editing Complexity: Even though objects are modular, designers might still need manual fine-tuning for very specific or complex scenes.

  • Ethical / Safety Considerations: Generated interactive worlds could simulate sensitive or dangerous scenarios; proper governance might be needed.


Meta’s Broader GenAI Strategy

  • Alongside WorldGen, Meta is also pushing AssetGen, which is specifically for generating 3D assets (meshes + textures) using generative AI. 

  • For its Horizon (Worlds) creators, Meta has already released GenAI tools for mesh generation, texture creation, audio, and even code/script generation.

  • This shows Meta’s long-term commitment to generative AI + spatial computing: making it easier for creators to build in VR, AR, and metaverse environments.


Conclusion

WorldGen is a significant leap forward in generative AI — not just for creating 3D art, but for building interactive, functional worlds. By combining structural reasoning (like navmesh) with modular design and game-engine compatibility, Meta is laying the foundation for a future where building immersive worlds might be as simple as writing a sentence.

If Meta scales this up, it could drastically lower the barrier for 3D world creation, enabling more creators, smaller teams, and non-experts to build rich virtual environments.


Compiled by

Aqsa Mahak (Financial Analyst)

Friday, November 14, 2025

 

AI for Finance & Stock Market

 How AI is Revolutionizing Finance and the Stock Market

The world of finance, traditionally driven by human expertise, intuition, and complex mathematical models, is undergoing a profound transformation. The catalyst? Artificial Intelligence (AI). From predicting market trends to automating trading and personalizing financial advice, AI is not just a buzzword; it's a powerful force reshaping how we interact with money and investments.



Artificial intelligence (AI) in finance helps drive insights for data analytics, performance measurement, predictions and forecasting, real-time calculations, customer servicing, intelligent data retrieval, and more. It is a set of technologies that enables financial services organizations to better understand markets and customers, analyze and learn from digital journeys, and engage in a way that mimics human intelligence and interactions at scale.

How is AI used in finance?
AI in finance can help in five general areas: personalize services and products, create opportunities, manage risk and fraud, enable transparency and compliance, and automate operations and reduce costs.

What is ML in finance?
Machine learning (ML) is a subset of AI that enables a system to autonomously learn and improve using neural networks and deep learning, without being explicitly programmed, by feeding it large amounts of data. It allows financial institutions to use the data to train models to solve specific problems with ML algorithms – and provide insights on how to improve them over time.
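
As a toy illustration of that training loop, the sketch below fits a simple classifier on synthetic "transaction" features to flag likely fraud. The data is simulated, so treat it as a shape-of-the-approach example rather than a production model.

    # Toy fraud model: train on synthetic, imbalanced transaction data and
    # score held-out transactions by their probability of being fraudulent.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # ~3% "fraud" rate, 10 anonymised features (all simulated).
    X, y = make_classification(n_samples=5000, n_features=10, weights=[0.97], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    fraud_prob = model.predict_proba(X_test)[:, 1]      # probability each transaction is fraud
    print("ROC AUC:", round(roc_auc_score(y_test, fraud_prob), 3))
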
Benefits of AI in Finance

Automation
AI can help automate workflows and processes, work autonomously and responsibly, and empower decision making and service delivery. For example, AI can help a payments provider automate aspects of cybersecurity by continuously monitoring and analyzing network traffic. Or, it may enhance a bank’s client-first approach with more flexible, personalized digital banking experiences that meet client needs faster and more securely.
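
For instance, a heavily simplified version of that kind of continuous monitoring could look like the anomaly-detection sketch below; the "traffic" features are randomly generated stand-ins, not real network data.

    # Toy anomaly detection over simulated network-traffic features
    # (bytes sent, request rate, failed logins). Real systems stream live
    # telemetry and combine many signals; this only shows the basic idea.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal = rng.normal(loc=[500, 20, 1], scale=[50, 5, 1], size=(1000, 3))
    suspicious = np.array([[5000, 300, 40], [4200, 250, 35]])   # injected outliers
    traffic = np.vstack([normal, suspicious])

    detector = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
    flags = detector.predict(traffic)                           # -1 = anomaly, 1 = normal
    print("flagged rows:", np.where(flags == -1)[0])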

Accuracy
AI can help financial services organizations control manual errors in data processing, analytics, document processing and onboarding, customer interactions, and other tasks through automation and algorithms that follow the same processes every single time.

Efficiency
When AI is used to perform repetitive tasks, people are free to focus on more strategic activities. AI can be used to automate processes like verifying or summarizing documents, transcribing phone calls, or answering customer questions like “what time do you close?” AI bots are often used to perform routine or low-touch tasks in the place of a human.

Speed
AI can process more information more quickly than a human, and find patterns and discover relationships in data that a human may miss. That means faster insights to drive decision making, trading communications, risk modeling, compliance management, and more.

Availability
With AI, you can help your customers complete financial tasks, find solutions to meet their goals, and manage and control their finances whenever and wherever they are. When running in the cloud, AI and ML can continuously work on their assigned activities.

Innovation
The ability to analyze vast amounts of data quickly can lead to unique and innovative product and service offerings that leapfrog the competition. For instance, AI has been used in predictive analytics to modernize insurance customer experiences without losing the human touch.

The future of AI in financial services
AI will help drive financial services growth. Many organizations have gone digital and learned new ways to sell, add efficiencies, and focus on their data. Going forward, they will need to personalize relationship-based customer engagement at scale. AI plays a key role in helping drive tailored customer responses, make safer and more accountable product and service recommendations, and earn trust by broadening concierge services that are available when customers need them the most.
 
In addition, financial institutions will need to build strong and unique permission-based digital customer profiles; however, the data they need may exist in silos. By breaking down these silos, applying an AI layer, and leveraging human engagement in a seamless way, financial institutions can create experiences that address the unique needs of their customers while scaling efficiently.

How Algorithms Are Driving Modern Stock Trading

The floor of the stock exchange used to be a bustling, chaotic place dominated by shouting human traders. Today, the real action is happening in data centers, where Artificial Intelligence (AI) and sophisticated algorithms are executing trades, analyzing market patterns, and making investment decisions in the blink of an eye.

AI isn't just a peripheral tool; it's the central engine powering the efficiency and speed of the modern stock market.

The Four Pillars of AI in Stock Trading

AI technology, especially Machine Learning (ML) and Deep Learning (DL), is employed across the entire investment lifecycle. Here are the four most significant applications:

1. Algorithmic Trading (High-Frequency Trading)

This is perhaps the most direct and visible impact of AI. AI-powered algorithms execute large numbers of orders at extremely high speeds.

Speed is Key: AI can analyze market data and execute a trade in microseconds, allowing firms to capitalize on fleeting price discrepancies that a human trader would never even perceive.

Arbitrage Opportunities: Algorithms constantly monitor multiple exchanges to find and exploit small price differences for the same asset.
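
A deliberately simplified illustration of that arbitrage check follows; the quotes are made up, and the sketch ignores fees, latency, and order sizing, which dominate in practice.

    # Cross-exchange arbitrage check on static, invented quotes.
    quotes = {
        "exchange_a": {"bid": 100.02, "ask": 100.05},
        "exchange_b": {"bid": 100.11, "ask": 100.14},
    }

    def find_arbitrage(quotes, min_edge=0.01):
        opportunities = []
        for buy_venue, buy_quote in quotes.items():
            for sell_venue, sell_quote in quotes.items():
                if buy_venue == sell_venue:
                    continue
                edge = sell_quote["bid"] - buy_quote["ask"]   # buy at the ask, sell at the bid
                if edge > min_edge:
                    opportunities.append((buy_venue, sell_venue, round(edge, 4)))
        return opportunities

    print(find_arbitrage(quotes))   # [('exchange_a', 'exchange_b', 0.06)]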

2. Predictive Analytics and Forecasting

AI models are trained on massive datasets—decades of stock prices, economic indicators, commodity prices, and more—to identify patterns invisible to traditional methods.

Pattern Recognition: ML algorithms can detect complex, non-linear relationships between variables to generate highly probable future price movements, although a 100% accurate forecast remains impossible.

Time Series Analysis: Deep learning networks, like Recurrent Neural Networks (RNNs), are particularly effective at analyzing time-based data to forecast short-term and long-term trends.
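
As a minimal sketch of that idea, the code below trains a small LSTM on a synthetic random-walk price series to predict the next value from a sliding window; it is illustrative only and says nothing about real predictive power.

    # Sliding-window LSTM forecaster on synthetic prices (illustration only).
    import numpy as np
    import tensorflow as tf

    window = 20
    prices = np.cumsum(np.random.randn(1000)) + 100           # synthetic random-walk "prices"
    X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
    y = prices[window:]
    X = X[..., np.newaxis]                                     # shape: (samples, window, 1)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(window, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)

    print("next-step forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))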

3. Sentiment Analysis

The market is driven by sentiment and news. AI excels at quantifying this human element, turning unstructured text into tradable signals.

News & Social Media: 

AI scrapes thousands of news articles, earnings reports, regulatory filings, and social media posts (like Twitter/X) in real-time.

Emotional Score: 

It uses Natural Language Processing (NLP) to determine the prevailing mood (positive, negative, or neutral) around a specific company or the market as a whole, providing a critical input for trading decisions.
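
A toy version of that scoring step, using a hand-written word list instead of a trained NLP model, might look like the following (the headlines and word lists are invented for the example):

    # Lexicon-based sentiment scoring of headlines (toy example only).
    POSITIVE = {"beats", "record", "upgrade", "growth", "surge"}
    NEGATIVE = {"misses", "lawsuit", "downgrade", "recall", "plunge"}

    def sentiment_score(text: str) -> float:
        words = text.lower().split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        total = pos + neg
        return 0.0 if total == 0 else (pos - neg) / total   # -1 (negative) .. +1 (positive)

    headlines = [
        "Company X beats earnings estimates, shares surge",
        "Regulator files lawsuit against Company X after product recall",
    ]
    for headline in headlines:
        print(round(sentiment_score(headline), 2), "|", headline)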


4. Risk Management and Portfolio Optimization

AI helps institutions and retail investors alike manage the inherent risks of the market.

Dynamic Risk Modeling: 

AI constantly assesses portfolio volatility and correlation across assets, automatically suggesting adjustments or even executing trades to rebalance the portfolio based on predefined risk tolerance.

Stress Testing: 

Complex AI simulations can test how a portfolio would perform under extreme, hypothetical market conditions (like a sudden economic crash) much faster and more comprehensively than human analysts.
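
To ground those two ideas, the sketch below computes portfolio volatility from a covariance matrix and then applies one crude stress scenario; all weights, volatilities, and shocks are made-up numbers.

    # Portfolio volatility plus a single hypothetical stress scenario.
    import numpy as np

    weights = np.array([0.5, 0.3, 0.2])                # equities, bonds, gold (hypothetical mix)
    ann_vol = np.array([0.20, 0.07, 0.15])             # stand-alone annualised volatilities
    corr = np.array([[1.0, 0.2, 0.1],
                     [0.2, 1.0, 0.0],
                     [0.1, 0.0, 1.0]])
    cov = np.outer(ann_vol, ann_vol) * corr

    portfolio_vol = float(np.sqrt(weights @ cov @ weights))
    print("portfolio volatility:", round(portfolio_vol, 3))

    # Stress test: shock every asset class at once (equity crash, flight to gold).
    shock = np.array([-0.35, -0.05, 0.10])
    print("stressed portfolio return:", round(float(weights @ shock), 3))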

Why AI Wins: Speed, Scale, and Objectivity

The rise of AI in the stock market boils down to three core competitive advantages:

Massive Scale: AI can monitor thousands of stocks, global markets, and continuous news feeds simultaneously—a task impossible for any human team.

Incredible Speed: Decisions and execution happen instantaneously, giving AI-driven strategies a crucial edge in volatile markets.

Unbiased Decisions: AI operates purely on data and logic, eliminating the cognitive biases (fear, greed, panic) that often lead to poor decision-making by human traders.

The Future is a Human-AI Partnership

While algorithms are taking over much of the execution and data analysis, the human element remains vital. Investment firms rely on human strategists to:

Define the Strategy: Setting the initial rules, objectives, and ethical constraints for the AI model.

Interpret the "Why": Understanding the fundamental economic or geopolitical reasons behind an AI-detected pattern.

The ultimate future of the stock market isn't AI versus humans, but a powerful synergy where human intuition and strategy are amplified by the data processing and speed of AI.

Interested in learning more about how these models are built? A follow-up post will provide an overview of the machine learning techniques (like supervised vs. reinforcement learning) used to train stock market prediction bots.

Compiled by
Aqsa Mahak (Financial Analyst)


Wednesday, October 15, 2025

OpenAI’s infrastructure moves: Broadcom and AMD partnerships

AMD partnership — securing GPU capacity

  • OpenAI and AMD recently announced a 6 gigawatt multi‑generation compute agreement. 

    • The first 1 GW deployment is expected in H2 2026, using the AMD Instinct MI450 GPUs.

    • To align incentives, AMD granted OpenAI warrants to acquire up to 160 million shares, vesting as deployments scale. 

    • This deepens their hardware‑software alignment and gives OpenAI more predictability over supply.

  • The move also signals a shift away from exclusive dependence on one vendor (e.g. Nvidia) and adds resilience to OpenAI’s supply chain.

  • That said, execution risk is nontrivial (manufacturing yields, integration, cooling/power, orchestration across heterogeneous hardware) — it’s a big bet, not a guarantee.

Broadcom partnership — custom AI accelerators

  • OpenAI also announced a strategic collaboration with Broadcom to co‑develop 10 gigawatts of custom AI accelerators + network systems.

    • OpenAI will handle the design side (model‑aware optimizations, embedding their algorithmic insights), while Broadcom will build and deploy the systems. 

    • The rollouts are scheduled from H2 2026 through end of 2029. 

    • The systems will include Broadcom’s Ethernet, PCIe, and optical interconnects to support scale-out and scale-up networking.

  • The rationale: by co‑designing hardware that’s tightly aligned with their models and workloads, OpenAI can extract performance, latency, and power gains that are hard to get from commodity chips. Also, it gives them more control over cost structure, supply chain, and differentiation.

  • It’s also a hedge: if GPU vendors become constrained (due to demand, export controls, etc.), having their own custom “accelerator fleet” gives OpenAI more autonomy.

Strategic considerations & risks

  • Heterogeneity & software abstraction: Running efficiently across multiple hardware types (AMD GPUs, Broadcom accelerators, maybe third‑party hardware) demands robust abstraction layers, compilers, memory & interconnect strategies.

  • Scale & operations: Deploying on the GW scale means huge demands on power, cooling, reliability, and infrastructure management.

  • Lock-in vs openness: The more custom the hardware, the harder it is for others to replicate (good for defensibility), but also harder to maintain ecosystem compatibility and standard tooling.

  • Technology risk & obsolescence: By the time some of these systems come online (2028–2029), model architectures might shift; hardware must be adaptable.

Overall, this combination of a major GPU commitment and in-house custom accelerator design signals that OpenAI is doubling down on owning the full stack — from model to hardware — to sustain performance leadership and margin control.



Codex, ChatGPT apps, and the evolving AI agent ecosystem

Codex becomes Generally Available (GA)

  • OpenAI announced that Codex is now generally available (no longer just a preview).

    • New features include a Slack integration (you can @Codex in a Slack thread, and it will gather context and respond) 

    • They also released a Codex SDK, enabling embedding the Codex agent capabilities into your own tools and applications. 

    • For workspace admins, OpenAI added environment controls, monitoring, and analytics features to manage usage at scale.

    • The underlying model is GPT‑5‑Codex, a version of GPT‑5 optimized for agentic coding. 

  • In the preview period, Codex’s usage grew rapidly (10× in 2 months).

  • The adoption inside OpenAI was also strong: nearly all internal engineers are now using it, merging ~70% more pull requests weekly than earlier. 

This is a shift: Codex is no longer a side experiment — it’s entering “production” status, with tooling, access, analytics, and embedding capabilities.

Apps in ChatGPT + Apps SDK

  • OpenAI also introduced “apps in ChatGPT” along with an Apps SDK (preview).

    • Developers can build interactive apps that live inside ChatGPT — so when a user is conversing, an app might “activate” contextually to help. 

    • Examples: you might ask ChatGPT to design a poster (via Canva), then follow up with creating a pitch deck. The apps handle the specialized UI. 

    • Apps respond to natural language and include interactive UI elements in the chat context. 

    • Privacy & permission controls are built in: when an app is first connected, ChatGPT prompts you on what data is shared. 

    • The roadmap is to support apps in ChatGPT Business, Enterprise, and Edu, and open a directory for users to discover apps. 

  • This effectively turns ChatGPT from “just a chatbot” into a platform, a container for third-party services and UIs — much like how smartphone OSs host apps.

  • For users, this means more seamless workflows: you don’t need to exit the chat to call specialized services. For developers, it's a new channel for embedding capabilities directly into conversations.

  • This also reflects a broader trend: “agentic” systems — AI that can call tools, fetch data, execute code — are becoming foundational, not edge features.

Other changes & enablers

  • Under the hood, updates to Codex include faster, more reliable performance, better real-time collaboration, and integration across terminal, IDE, web, and even phones. Codex’s unified experience lets you hand off between local editing and cloud tasks without losing context. 

  • Importantly, Codex cloud tasks will soon count against your usage quota (from Oct 20 onward). 

Takeaway: OpenAI is pushing hard to position Codex not as a toy for demos, but as a full-fledged “coding coworker” — and to make ChatGPT itself a canvas for interacting with domain-specific services (apps).


Google’s advances: Veo 3 and Gemini Robotics / Embodied AI

Google’s video tools: Veo 3 / Flow updates

  • Google is enhancing its AI-powered video editing/generation tool (Flow / Veo 3). 

    • In the 3.1 update, users can modify lighting and shadows in generated video, making the output more realistic. 

    • New “Ingredients to Video” allows video + audio generation from three reference images. 

    • “Frames to Video” can transition between a start and end image, generating intermediate frames + sound. 

    • “Scene Extension” lets you expand a video clip by up to 1 minute with AI-generated visuals and audio. 

    • Object removal is coming: remove an item, and the system reconstructs the scene to eliminate traces. 

    • Veo 3.1 is currently in “paid preview” via the Gemini API. 

  • These features push AI video closer to full creative editing — not just content generation but tool-like capabilities to refine, extend, and manipulate visuals.

  • This is part of a broader push (across Google and rivals) to integrate generative video into everyday creation flows (production, marketing, creative content).

Gemini Robotics / Embodied AI

  • Google DeepMind introduced Gemini Robotics, a family of AI models that bridge the gap between digital reasoning and physical action. 

    • The models are built on top of Gemini 2.0 and include Vision-Language-Action (VLA) capabilities. 

    • They can process images, video, and natural language prompts to generate actions for robots — e.g. manipulation, object interaction, movement in dynamic environments. 

    • The Gemini Robotics‑ER (Embodied Reasoning) variant extends multimodal reasoning into 3D spatial and temporal understanding: bounding boxes, trajectory planning, multi-view correspondences. 

    • The Gemini Robotics‑ER 1.5 API lets robots “understand” a scene: detect objects, plan sub‑actions, and execute them by invoking functions or code.

  • The ambition: robots that can interpret natural language commands such as “put the apple in the bowl,” break them down into sub-actions (pick, move, place), and carry them out in real environments; a conceptual sketch of that kind of decomposition follows this list.

  • The shift is toward generalist robotics — not rigid, narrowly programmed systems, but embodied agents that integrate perception, reasoning, and action.

  • Challenges remain: robustness in unstructured environments, safety, long-horizon planning, real-time adaptation, hardware constraints (sensing, actuation), and sim-to-real transfer.
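
Purely as a conceptual illustration (this is not the Gemini Robotics API, and a real system would ground every step in camera input and learned policies), a command-to-sub-action decomposition could be represented like this:

    # Conceptual sketch: decomposing a natural-language command into sub-actions
    # that a robot controller could execute. The mapping is hard-coded for one
    # example command; a real system would generate it with a VLA model.
    def plan(command: str) -> list[dict]:
        if command == "put the apple in the bowl":
            return [
                {"action": "detect", "object": "apple"},
                {"action": "pick", "object": "apple"},
                {"action": "move_to", "target": "bowl"},
                {"action": "place", "object": "apple", "target": "bowl"},
            ]
        raise ValueError("unknown command")

    for step in plan("put the apple in the bowl"):
        print(step)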


Broader trends: AI in science, enterprise uptake, etc.

AI in scientific discovery

  • Research continues to push AI toward assisting in advanced scientific tasks: hypothesis generation, data interpretation, experiment planning, and cross-domain integration.

  • For example, a new benchmark SciVideoBench aims to assess video reasoning in scientific domains, where models must interpret experimental videos and answer domain-specific questions.

    • Early evaluations show current state-of-the-art models (even proprietary ones) struggle significantly in these tasks, indicating much headroom. 

  • Also, embodied AI models (like Gemini Robotics) and multimodal video models (e.g. “AlanaVLM” for egocentric video reasoning) are beginning to bridge perception, action, and scientific contexts. 

  • The interplay between AI and science is accelerating: AI tools help interpret high-throughput experiments (e.g., in genomics, high-energy physics, material science) more rapidly, enabling new hypotheses and insights.

  • But caution: scientific domains require strong rigor, domain validity, explainability, and safety. The “hallucination” or overconfidence of AI remains a risk. The models must be grounded, verifiable, and interpretable.

Enterprise AI adoption and momentum

  • Across industries, enterprises are increasingly embedding AI models into workflows, products, and decision systems — not just as experimental pilots, but as core infrastructure.

  • Key enablers:

    • Scalable infrastructure: the compute deals (AMD, Broadcom) and cloud expansion make it more feasible to host large models in enterprise settings reliably.

    • Tooling & integration: SDKs, admin controls, embedding agents (Codex, apps in ChatGPT) reduce the friction of integrating AI into existing systems.

    • Vendor partnerships: e.g. OpenAI’s expanded alliance with Salesforce to integrate ChatGPT + their models into the Salesforce ecosystem. 

    • New business models: AI-powered assistants, agentic tooling, “AI as a service” embedded in traditional products.

  • But there are still obstacles: data privacy, interpretability, alignment with domain constraints, cost (compute and licensing), regulatory & compliance issues, and workforce adaptation.

  • The shifts in infrastructure also raise barriers to entry: smaller teams or companies may struggle to compete if they can’t access large-scale compute, custom hardware, or high-fidelity agents.


Synthesis and forward look

Putting all of this together, here’s how I see the trajectory:

  1. Full-stack control is becoming essential

    • It’s no longer sufficient to rely purely on third-party GPUs; leading AI players are working to own every layer (model, hardware, orchestration). The OpenAI + AMD + Broadcom moves are emblematic of that.

  2. ChatGPT evolves into a platform & agent hub

    • With apps inside ChatGPT and embedding SDKs, the “chat interface” becomes the universal shell. You won’t need to leave the conversation to call services — they’ll be embedded and context-aware.

  3. Agentic AI is entering the mainstream

    • Tools like Codex, AgentKit (drag‑and-drop agents), app SDKs, robotics agents (Gemini Robotics) all point to a world where AI systems will actively carry out tasks for users, rather than just respond.

  4. Perception → reasoning → action bridging

    • The push into embodied AI and robotics (e.g. Gemini Robotics) shows the ambition to connect “thinking AI” with the physical world, closing the loop between perception, planning, and execution.

  5. Scientific and industrial AI rise

    • As models mature, more of their impact will come in serious domains: scientific discovery, industrial automation, healthcare, logistics, etc. The barrier is less about “can AI do this” and more about “can it do it reliably, verifiably, safely.”

  6. Consolidation and scale economics

    • The compute and infrastructure demands will favor large players. The cost of entry is rising. Smaller entities will need to differentiate via niche specialization, algorithmic innovation, or partnering.

  7. Governance, safety, and alignment will increasingly matter

    • As AI agents act in the world and have more autonomy, issues of safety, alignment, auditability, verification, adversarial robustness, and control are becoming urgent, not academic. 

    • Compiled by

    • Aqsa Mahak (Financial Analyst)



