<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:media="http://search.yahoo.com/mrss/" xmlns:content="http://purl.org/rss/1.0/modules/content/" version="2.0">
  <channel>
    <title>Aaj TV English News - Technology</title>
    <link>https://english.aaj.tv/</link>
    <description>Aaj TV English</description>
    <language>en-US</language>
    <copyright>Copyright 2026</copyright>
    <pubDate>Tue, 05 May 2026 20:47:50 +0500</pubDate>
    <lastBuildDate>Tue, 05 May 2026 20:47:50 +0500</lastBuildDate>
    <ttl>60</ttl>
    <item xmlns:default="http://purl.org/rss/1.0/modules/content/">
      <title>Microsoft, Google and xAI to give US government early access to AI models for security checks</title>
      <link>https://english.aaj.tv/news/330458308/microsoft-google-and-xai-to-give-us-government-early-access-to-ai-models-for-security-checks</link>
      <description>&lt;p&gt;&lt;strong&gt;Microsoft, Alphabet-owned Google and Elon Musk’s xAI will give the US government early access to new artificial intelligence models before their public release to allow checks for national security risks under a new deal.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The Centre for AI Standards and Innovation at the Department of Commerce said on Tuesday that the agreement would allow it to evaluate the models before deployment and conduct research to assess their capabilities and security risks.&lt;/p&gt;
&lt;p&gt;The agreement underscores growing concern in Washington over the national security risks posed by powerful AI systems. By securing early access to frontier models, US officials are aiming to identify threats ranging from cyberattacks to military misuse before the tools are widely deployed.&lt;/p&gt;
&lt;p&gt;The development of advanced AI systems, including Anthropic’s Mythos, has in recent weeks created a stir globally, including among US officials and corporate America, over their ability to supercharge hackers.&lt;/p&gt;
&lt;p&gt;“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” CAISI Director Chris Fall said in a statement.&lt;/p&gt;
&lt;p&gt;The move builds on agreements with OpenAI and Anthropic, established in 2024 under the Biden administration when CAISI was known as the US Artificial Intelligence Safety Institute.&lt;/p&gt;
&lt;p&gt;CAISI, which serves as the government’s main hub for AI model testing, said it had already completed more than 40 evaluations, including on cutting-edge models not yet available to the public.&lt;/p&gt;
&lt;p&gt;Developers frequently hand over versions of their models with safety guardrails stripped back so the centre can probe for national security risks, the agency said.&lt;/p&gt;
&lt;p&gt;Microsoft and xAI did not immediately respond to requests for comment. Google declined to comment.&lt;/p&gt;
&lt;p&gt;Last week, the Pentagon said it had reached agreements with seven AI companies to deploy their advanced capabilities on the Defence Department’s classified networks as it seeks to broaden the range of AI providers working across the military.&lt;/p&gt;
&lt;p&gt;The Pentagon announcement did not include Anthropic, which has been embroiled in a dispute with the Pentagon over guardrails on the military’s use of its AI tools.&lt;/p&gt;
</description>
      <content:encoded xmlns="http://purl.org/rss/1.0/modules/content/"><![CDATA[<p><strong>Microsoft, Alphabet-owned Google and Elon Musk’s xAI will give the US government early access to new artificial intelligence models before their public release to allow checks for national security risks under a new deal.</strong></p>
<p>The Centre for AI Standards and Innovation at the Department of Commerce said on Tuesday that the agreement would allow it to evaluate the models before deployment and conduct research to assess their capabilities and security risks.</p>
<p>The agreement underscores growing concern in Washington over the national security risks posed by powerful AI systems. By securing early access to frontier models, US officials are aiming to identify threats ranging from cyberattacks to military misuse before the tools are widely deployed.</p>
<p>The development of advanced AI systems, including Anthropic’s Mythos, has in recent weeks created a stir globally, including among US officials and corporate America, over their ability to supercharge hackers.</p>
<p>“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” CAISI Director Chris Fall said in a statement.</p>
<p>The move builds on agreements with OpenAI and Anthropic, established in 2024 under the Biden administration when CAISI was known as the US Artificial Intelligence Safety Institute.</p>
<p>CAISI, which serves as the government’s main hub for AI model testing, said it had already completed more than 40 evaluations, including on cutting-edge models not yet available to the public.</p>
<p>Developers frequently hand over versions of their models with safety guardrails stripped back so the centre can probe for national security risks, the agency said.</p>
<p>Microsoft and xAI did not immediately respond to requests for comment. Google declined to comment.</p>
<p>Last week, the Pentagon said it had reached agreements with seven AI companies to deploy their advanced capabilities on the Defence Department’s classified networks as it seeks to broaden the range of AI providers working across the military.</p>
<p>The Pentagon announcement did not include Anthropic, which has been embroiled in a dispute with the Pentagon over guardrails on the military’s use of its AI tools.</p>
]]></content:encoded>
      <category>Technology</category>
      <guid>https://english.aaj.tv/news/330458308</guid>
      <pubDate>Tue, 05 May 2026 19:45:48 +0500</pubDate>
      <author>none@none.com (Reuters)</author>
      <media:content url="https://i.aaj.tv/large/2026/05/05193334d464e2a.webp" type="image/webp" medium="image" height="428" width="640">
        <media:thumbnail url="https://i.aaj.tv/thumbnail/2026/05/05193334d464e2a.webp"/>
        <media:title>A representational image. -- Reuters</media:title>
      </media:content>
    </item>
  </channel>
</rss>
