
A new Pentagon program aims to speed up decisions on what AI tech is trustworthy enough to deploy – The Associated Press

NATIONAL HARBOR, Md. (AP) -- Artificial intelligence employed by the U.S. military has piloted pint-sized surveillance drones in special operations forces missions and helped Ukraine in its war against Russia. It tracks soldiers' fitness, predicts when Air Force planes need maintenance and helps keep tabs on rivals in space.

Now, the Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative, dubbed Replicator, seeks to galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are "small, smart, cheap, and many," Deputy Secretary of Defense Kathleen Hicks said in August.

While its funding is uncertain and details vague, Replicator is expected to accelerate hard decisions on what AI tech is mature and trustworthy enough to deploy - including on weaponized systems.

There is little dispute among scientists, industry experts and Pentagon officials that the U.S. will within the next few years have fully autonomous lethal weapons. And though officials insist humans will always be in control, experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles.

That's especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them, and none of China, Russia, Iran, India or Pakistan has signed a U.S.-initiated pledge to use military AI responsibly.

The Longshot, an air-launched unmanned aircraft that General Atomics is developing with the Defense Advanced Research Projects Agency for use in tandem with piloted Air Force jets, is displayed at the Air & Space Forces Association Air, Space & Cyber Conference, Wednesday, Sept. 13, 2023, in Oxon Hill, Md. Pentagon planners envision using such drones in human-machine teaming to overwhelm an adversary. But to be fielded, developers will need to prove the AI tech is reliable and trustworthy enough. (AP Photo/Alex Brandon)

It's unclear whether the Pentagon is currently formally assessing any fully autonomous lethal weapons system for deployment, as required by a 2012 directive. A Pentagon spokeswoman would not say.

Replicator highlights immense technological and personnel challenges for Pentagon procurement and development as the AI revolution promises to transform how wars are fought.

The Department of Defense "is struggling to adopt the AI developments from the last machine-learning breakthrough," said Gregory Allen, a former top Pentagon AI official now at the Center for Strategic and International Studies think tank.

The Pentagon's portfolio boasts more than 800 unclassified AI-related projects, many of them still in testing. Typically, machine learning and neural networks are helping humans gain insights and create efficiencies.

"The AI that we've got in the Department of Defense right now is heavily leveraged and augments people," said Missy Cummings, director of George Mason University's robotics center and a former Navy fighter pilot. "There's no AI running around on its own. People are using it to try to understand the fog of war better."

One domain where AI-assisted tools are tracking potential threats is space, the latest frontier in military competition.

China envisions using AI, including on satellites, to "make decisions on who is and isn't an adversary," U.S. Space Force chief technology and innovation officer Lisa Costa told an online conference this month.

The U.S. aims to keep pace.

An operational prototype called Machina, used by Space Force, keeps tabs autonomously on more than 40,000 objects in space, orchestrating thousands of data collections nightly with a global telescope network.

Machina's algorithms marshal telescope sensors. Computer vision and large language models tell them what objects to track. And AI choreographs, drawing instantly on astrodynamics and physics datasets, Col. Wallace "Rhet" Turnbull of Space Systems Command told a conference in August.
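The nightly tasking problem Turnbull describes, deciding which of tens of thousands of objects get one of a limited number of telescope looks, can be sketched as a priority queue over a scored catalog. The sketch below is purely illustrative: the object IDs, the scoring rule, and the `schedule_collections` helper are invented for the example, and this is not a description of how Machina actually works.

```python
import heapq

def schedule_collections(catalog, slots_per_night):
    """Pick the highest-scoring objects for tonight's telescope slots.

    Toy scoring rule (an assumption for illustration): an object's urgency
    grows with its mission priority and with how long it has gone unobserved.
    """
    # Negate scores because heapq is a min-heap and we want the largest first.
    scored = [(-(obj["priority"] * obj["hours_since_observed"]), obj["id"])
              for obj in catalog]
    heapq.heapify(scored)
    n = min(slots_per_night, len(scored))
    return [heapq.heappop(scored)[1] for _ in range(n)]

# Hypothetical three-object catalog with two telescope slots tonight.
catalog = [
    {"id": "SAT-001", "priority": 3, "hours_since_observed": 12},  # score 36
    {"id": "DEB-417", "priority": 1, "hours_since_observed": 72},  # score 72
    {"id": "SAT-990", "priority": 5, "hours_since_observed": 2},   # score 10
]
tonight = schedule_collections(catalog, slots_per_night=2)
```

A real system would also fold in telescope geometry, weather and sensor capabilities; the point is only that the orchestration core is scheduling over a scored catalog.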

Another AI project at Space Force analyzes radar data to detect imminent adversary missile launches, he said.

Elsewhere, AI's predictive powers help the Air Force keep its fleet aloft, anticipating the maintenance needs of more than 2,600 aircraft including B-1 bombers and Blackhawk helicopters.

Machine-learning models identify possible failures dozens of hours before they happen, said Tom Siebel, CEO of Silicon Valley-based C3 AI, which has the contract. C3's tech also models the trajectories of missiles for the U.S. Missile Defense Agency and identifies insider threats in the federal workforce for the Defense Counterintelligence and Security Agency.
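At its core, flagging a failure dozens of hours in advance is anomaly detection over streaming sensor data. The sketch below is a deliberately simplified stand-in, not C3 AI's actual modeling approach: the vibration series, the window size and the z-score threshold are all invented for illustration.

```python
from statistics import mean, stdev

def maintenance_alerts(readings, window=24, threshold=3.0):
    """Flag indices whose sensor reading drifts more than `threshold`
    standard deviations from the trailing `window`-sample baseline."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# Hypothetical hourly vibration data: a stable oscillation, then a drift
# in the final hours that a crew would want to see well before failure.
vibration = [0.98, 1.02] * 20 + [1.6, 1.8, 2.1]
```

With these numbers the drift is flagged as soon as it appears at hour 40; production systems use learned models rather than a fixed z-score, but the early-warning shape is the same.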

Among health-related efforts is a pilot project tracking the fitness of the Army's entire Third Infantry Division, more than 13,000 soldiers. Predictive modeling and AI help reduce injuries and increase performance, said Maj. Matt Visser.

In Ukraine, AI provided by the Pentagon and its NATO allies helps thwart Russian aggression.

NATO allies share intelligence from data gathered by satellites, drones and humans, some of it aggregated with software from U.S. contractor Palantir. Some data comes from Maven, the Pentagon's pathfinding AI project now mostly managed by the National Geospatial-Intelligence Agency, say officials including retired Air Force Gen. Jack Shanahan, the inaugural Pentagon AI director.

Maven began in 2017 as an effort to process video from drones in the Middle East, spurred by U.S. Special Operations forces fighting ISIS and al-Qaeda, and now aggregates and analyzes a wide array of sensor- and human-derived data.

AI has also helped the U.S.-created Security Assistance Group-Ukraine organize logistics for military assistance from a coalition of 40 countries, Pentagon officials say.

To survive on the battlefield these days, military units must be small, mostly invisible and quick to move, because exponentially growing networks of sensors let anyone "see anywhere on the globe at any moment," then-Joint Chiefs chairman Gen. Mark Milley observed in a June speech. "And what you can see, you can shoot."

To more quickly connect combatants, the Pentagon has prioritized the development of intertwined battle networks called Joint All-Domain Command and Control to automate the processing of optical, infrared, radar and other data across the armed services. But the challenge is huge and fraught with bureaucracy.

Christian Brose, a former Senate Armed Services Committee staff director now at the defense tech firm Anduril, is among military reform advocates who nevertheless believe they may be winning here to a certain extent.

"The argument may be less about whether this is the right thing to do, and increasingly more about how do we actually do it -- and on the rapid timelines required," he said. Brose's 2020 book, "The Kill Chain," argues for urgent retooling to match China in the race to develop smarter and cheaper networked weapons systems.

To that end, the U.S. military is hard at work on human-machine teaming. Dozens of uncrewed air and sea vehicles currently keep tabs on Iranian activity. U.S. Marines and Special Forces also use Anduril's autonomous Ghost mini-copter, sensor towers and counter-drone tech to protect American forces.

Industry advances in computer vision have been essential. Shield AI lets drones operate without GPS, communications or even remote pilots. It's the key to its Nova, a quadcopter, which U.S. special operations units have used in conflict areas to scout buildings.

On the horizon: The Air Force's "loyal wingman" program intends to pair piloted aircraft with autonomous ones. An F-16 pilot might, for instance, send out drones to scout, draw enemy fire or attack targets. Air Force leaders are aiming for a debut later this decade.

The loyal wingman timeline doesn't quite mesh with Replicator's, which many consider overly ambitious. The Pentagon's vagueness on Replicator, meantime, may partly intend to keep rivals guessing, though planners may also still be feeling their way on feature and mission goals, said Paul Scharre, a military AI expert and author of "Four Battlegrounds."

Anduril and Shield AI, each backed by hundreds of millions in venture capital funding, are among companies vying for contracts.

Nathan Michael, chief technology officer at Shield AI, estimates the company will have an autonomous swarm of at least three uncrewed aircraft ready in a year using its V-BAT aerial drone. The U.S. military currently uses the V-BAT -- without an AI mind -- on Navy ships, on counter-drug missions and in support of Marine Expeditionary Units, the company says.

It will take some time before larger swarms can be reliably fielded, Michael said. "Everything is crawl, walk, run -- unless you're setting yourself up for failure."

The only weapons systems that Shanahan, the inaugural Pentagon AI chief, currently trusts to operate autonomously are wholly defensive, like Phalanx anti-missile systems on ships. He worries less about autonomous weapons making decisions on their own than about systems that don't work as advertised or kill noncombatants or friendly forces.

The department's current chief digital and AI officer, Craig Martell, is determined not to let that happen.

"Regardless of the autonomy of the system, there will always be a responsible agent that understands the limitations of the system, has trained well with the system, has justified confidence of when and where it's deployable -- and will always take the responsibility," said Martell, who previously headed machine-learning at LinkedIn and Lyft. "That will never not be the case."

As to when AI will be reliable enough for lethal autonomy, Martell said it makes no sense to generalize. For example, Martell trusts his car's adaptive cruise control but not the tech that's supposed to keep it from changing lanes. "As the responsible agent, I would not deploy that except in very constrained situations," he said. "Now extrapolate that to the military."

Martell's office is evaluating potential generative AI use cases (it has a special task force for that) but focuses more on testing and evaluating AI in development.

One urgent challenge, says Jane Pinelis, chief AI engineer at Johns Hopkins University's Applied Physics Lab and former chief of AI assurance in Martell's office, is recruiting and retaining the talent needed to test AI tech. The Pentagon can't compete on salaries. Computer science PhDs with AI-related skills can earn more than the military's top-ranking generals and admirals.

Testing and evaluation standards are also immature, a recent National Academy of Sciences report on Air Force AI highlighted.

Might that mean the U.S. one day fielding, under duress, autonomous weapons that don't fully pass muster?

"We are still operating under the assumption that we have time to do this as rigorously and as diligently as possible," said Pinelis. "I think if we're less than ready and it's time to take action, somebody is going to be forced to make a decision."


US closer to using AI-drones that can autonomously decide to kill humans – Business Insider

South Korea's military drones fly in formation during a joint military drill with the US at Seungjin Fire Training Field in Pocheon on May 25, 2023. YELIM LEE

The deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality, The New York Times reported.

Lethal autonomous weapons that can select targets using AI are being developed by countries including the US, China, and Israel.

The use of the so-called "killer robots" would mark a disturbing development, say critics, handing life and death battlefield decisions to machines with no human input.

Several governments are lobbying the UN for a binding resolution restricting the use of AI killer drones, but the US is among a group of nations -- which also includes Russia, Australia, and Israel -- resisting any such move, favoring a non-binding resolution instead, The Times reported.

"This is really one of the most significant inflection points for humanity," Alexander Kmentt, Austria's chief negotiator on the issue, told The Times. "What's the role of human beings in the use of force -- it's an absolutely fundamental security issue, a legal issue and an ethical issue."

The Pentagon is working toward deploying swarms of thousands of AI-enabled drones, according to a notice published earlier this year.

In a speech in August, US Deputy Secretary of Defense, Kathleen Hicks, said technology like AI-controlled drone swarms would enable the US to offset China's People's Liberation Army's (PLA) numerical advantage in weapons and people.

"We'll counter the PLA's mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat," she said, reported Reuters.

Frank Kendall, the Air Force secretary, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.

"Individual decisions versus not doing individual decisions is the difference between winning and losing and you're not going to lose," he said.

"I don't think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves."

The New Scientist reported in October that AI-controlled drones have already been deployed on the battlefield by Ukraine in its fight against the Russian invasion, though it's unclear if any have taken action resulting in human casualties.

The Pentagon did not immediately respond to a request for comment.


AI Is the New Industrial Revolution. How Jobs and Work Will Change. – Barron’s

The rise of generative artificial intelligence heralds a new stage of the Industrial Revolution, one where machines think, learn, self-replicate, and can master many tasks that were once reserved for humans. This phase will be just as disruptive -- and transformative -- as the previous ones.

That AI technology will come for jobs is certain. The destruction and creation of jobs is a defining characteristic of the Industrial Revolution. Less certain is what kind of new jobs -- and how many -- will take their place.

Some scholars divide the Industrial Revolution into three stages: steam, which started around 1770; electricity, in 1870; and information, in 1950. Think of the automobile industry replacing the horse-and-carriage trade in the first decades of the 20th century, or IT departments supplanting secretarial pools in recent decades.

In all of these cases, some people get left behind. The new jobs can be vastly different in nature, requiring novel skills and perhaps relocation, such as from farm to city in the first Industrial Revolution.

As shares of companies involved in the AI industry have soared, concerns about job security have grown. AI is finding its way into all aspects of life, from chatbots to surgery to battlefield drones. AI was at the center of this year's highest-profile labor disputes, involving industries as disparate as Detroit car makers and Hollywood screenwriters. AI was on the agenda of the recent summit between President Joe Biden and Chinese President Xi Jinping.

The advances in AI technology are coming fast, with some predicting the singularity -- the theoretical point when machines evolve beyond human control -- is just a few years away. If that's true, job losses would be the least of worries.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," wrote a group of industry leaders, technologists, and academics this year in an open letter.

Assuming we survive, what can the past show us about how we will work with -- or for -- these machines in the future?

Consider the first Industrial Revolution, where mortals fashioned their own crude machines. Run on Britain's inexpensive and abundant coal and manned by its cheap and abundant unskilled labor, steam engines powered trains, ships, and factories. The U.K. became a manufacturing powerhouse.

Not everyone welcomed the mechanical competition.

A wanted poster from January 1812, in Nottingham, England, offers a 200-pound reward for information about masked men who broke into a local workshop and "wantonly and feloniously broke and destroyed five stocking frames" (mechanical knitting machines).

The vandals were Luddites, textile artisans who waged a campaign of destruction against manufacturing between 1811 and 1817. They weren't so much opposed to the machines as they were to a factory system that no longer valued their expertise.

Machine-breaking was an early form of job action, "collective bargaining by riot," as historian Eric Hobsbawm put it. It was a precursor to many labor disputes to follow.

The second Industrial Revolution, kick-started by the completion of the transcontinental railroad in 1869, propelled the U.S. to global dominance. Breakthroughs including electricity, mass production, and the corporation transformed the world with marvels like cars, airplanes, refrigerators, and radios.

These advances also drew a backlash from people whose jobs were threatened.

"Only the lovers who flock to the dimmest nooks of the parks to hold hands and spoon found no fault with the striking lamplighters last night," the New-York Tribune wrote on April 26, 1907, after a walkout by the men who hand-lit the city's 25,000 gas streetlights each night.

The lamplighters struck over claims of union busting, but the real enemy was in plain sight: the electric lightbulb.

"In the downtown part of Manhattan, where there are electric lights in plenty, there was no inconvenience," the Tribune reported. The days of the lamplighters' centuries-old trade were numbered.

Numbered also were the days of carriage makers, icemen, and elevator operators.

The third Industrial Revolution, meanwhile, rang the death knell for switchboard operators, newspaper typesetters, and most anyone whose job could be done by a computer.

Those lost jobs were replaced, in spades. The rise of personal computing and the internet led directly to the loss of 3.5 million U.S. jobs since 1980, according to McKinsey Global Institute in 2018. At the same time, new technologies created 19 million new jobs.

Looking ahead, MGI estimates technological advances might force as many as 375 million workers globally, out of 2.7 billion total, to switch occupations by 2030.
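Put those two MGI figures side by side and the scale is easier to grasp: the estimate amounts to roughly one in seven workers worldwide, a share worth computing explicitly.

```python
# MGI's estimate: up to 375 million of 2.7 billion workers globally
# may need to switch occupations by 2030.
workers_switching = 375_000_000
global_workforce = 2_700_000_000

share = workers_switching / global_workforce
print(f"{share:.1%} of the global workforce")  # prints "13.9% of the global workforce"
```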

A survey conducted by LinkedIn for the World Economic Forum offers hints about where job growth might come from. Of the five fastest-growing job areas between 2018 and 2022, all but one involve people skills: sales and customer engagement; human resources and talent acquisition; marketing and communications; partnerships and alliances. The other: technology and IT. Even the robots will need their human handlers.

McKinsey Global Institute's Michael Chui suggests people won't be replaced by technology in the future so much as they will partner more deeply with it.

"Almost all of us are cyborgs nowadays, in some sense," he told Barron's, pointing to the headphones he was wearing during a Zoom discussion.

In "The Iliad," 28 centuries ago, Homer describes robotic slaves crafted by the god Hephaestus. Chui doesn't expect humanoid robots, like Homer's creations, to come down and do everything we once did.

For most of us, he says, "it's parts of our jobs that machines will actually take over."

Each wave of the Industrial Revolution brought greater prosperity -- even if it wasn't equally shared -- advances in science and medicine, cheaper goods, and a more connected world. The AI wave might even do more.

"I've described it as giving us superpowers, and I think it's true," Chui says.

Superpowers or extinction -- starkly different visions for our brave new AI future. Best hang on.

Write to editors@barrons.com


Chruściki concerns and the next chess grandmaster – Polish American News

Got a question? Just ask!

Q: Is chruściki better made with bleached or unbleached flour? Stella, CT

A: Follow-up reply: I never use bleached flour for anything unless a recipe specifically calls for it. That being said, chruściki is plural; therefore, it should read "Are chruściki" Sorry, not being snarky, but as an editor I have to comment. Krysia, NY.

My late wife Majka was the chruściki expert. During our two years in Bay City, MI, she and a lady friend from Poland set up a little local chruściki business. Once a week they would supply a dozen groceries and one ice-cream parlor with those pastries, and locals said they were the best they had ever tasted. That was probably because they were fried in pork lard. One week the two ladies couldn't get Michigan's Robin Hood brand of enriched flour, so they used Gold Medal instead. The chruściki came out tough and brittle, and customers complained. That is not to suggest that Gold Medal is bad, only that they had adjusted their recipe and procedures to a familiar brand. Robert Strybel, Warsaw

Q: An Israeli Army spokesperson, I understand, has Polish heritage. Is that correct? I heard him speak to a news anchor and he mentioned it. Frank, NY

A: Good catch, Frank. Yes, Jonathan Conricus is a former Lieutenant Colonel with the Israel Defense Forces and now serves as one of their spokespersons. His father is Swedish but his mother is Polish-Jewish.

Q: I agree with your views about the ideological problem in the Middle East, but for the sake of life, Israel should stop and negotiate for the lives of the hostages. Life is supposed to be our main focus after all. Donna, NY

A: Respectfully, Donna, I disagree.
I do agree with life being the main focus, but where is the rationale in negotiating with a group of people who commit an atrocious massacre, then strip a girl naked and parade her through the streets in the back of a pickup truck while, apparently, Gazans, including kids as seen in the video released by Hamas, spit at her, culminating in her being beheaded? It's interesting, however, that we see some networks magnifying coverage of the moans and groans and tears of the Gazans who have lost children and loved ones in the battle. Perhaps there is a silver lining to this tragedy of reportedly 10,000 civilians killed, in that hopefully now, given all these Gazan deaths, life will mean something and cause Gazans to rise up and evict any group among them who does not value human life.

Q: How do you tell the expiration date on a can of Polish beer? Mickey, NJ

A: Thanks for your question, Mickey. This dilemma has arisen in the past, especially during summer when ice-cold Polish piwo hits the spot. It has become a bit difficult, especially attempting to decode a series of letters and numbers on the bottom of a can of beer. Unfortunately, the internet does not always provide adequate instruction for foreign beers. I will reach out to my colleague Robert Strybel in Warsaw to see if he can contact a Polish brewery to get the definitive method.

Q: I know you live in New York City and was wondering your take on the Thanksgiving Day Parade's latest decision to represent transgender people, which has caused an outcry with some traditionalists. How will this affect the Pulaski Day Parade too? Dorothy, NJ

A: The way things are trending, Dorothy, the future may never see such parades any longer, or at least in their original form. I brought up a similar issue about the Fourth of July fireworks show, when it seemed like the fireworks took a back seat to the promotion of the latest craze in hip-hop and rap music. What's happening here?
Is it that our cultural infrastructure is being usurped by every kind of trendy movement? And if organizers don't yield to the trendies, they get sued for discrimination? In my view, every kind of stance and position, including transgender, hip-hop artists, etc., save for supremacy, vulgarity, and nudity, should be allowed to be represented at parades and shows, with the understanding that no group should dominate or be focused on more than any other. Unfortunately, too many of these groups want themselves in the spotlight, which can cause parades to take on a whole new meaning.

Q: My son is into chess and subscribed to a magazine which had a story about Grzegorz Gajewski. Any news about this sport would be a good booster for him. Thank you! Hank, NY

A: Queen to D4, Bishop to C3? Great to hear, Hank. How many kids these days are into chess? The once heralded Gajewski, now at 38 years of age, may be fading into the sunset, however. It looks like the new kid on the block is 25-year-old Jan-Krzysztof Duda, pictured. I'll yield to the sports desk for a possible future full story, but suffice it to say this youngster, who is Poland's number one chess player, has been on fire for the last two years, either winning or finishing high in multiple global events. The Polish government even awarded him the Golden Cross of Merit. And get this: unlike most people his age, in his spare time this kid listens to Beethoven and Mozart.


5 Lessons I’ve Learned From Using AI (Opinion) – Education Week

Artificial intelligence is all the rage right now, and it will be for the rest of my lifetime. In this Harvard Business Review article, McAfee, Rock, and Brynjolfsson refer to AI as a general-purpose technology akin to electricity, the steam engine, and the internet. Before I go any further, and risk getting comments thrown at me on social media, I do understand that there are IP concerns, among other issues that we still have to work out. However, what we also know is that AI is here to stay, so we as educators can either get on board with it or be once again deemed behind the times, where our students use the technology at home and come to school to go back a century or two.

Over the past few months, I have used AI more and more. Partly because I wanted to see what all of the fuss was about and also because I needed to play around with it to see if it was something that I should be using in my role as an author, consultant, and owner of a company.

For full disclosure, I am not a techie. I have created a website for the Instructional Leadership Collective, designed courses through Thinkific, and use Mentimeter for all of my virtual and in-person workshops. However, I would never consider myself an expert in using technology, which is why I am trying out AI. I feel that every once in a while, we should feel uncomfortable as we learn, because being uncomfortable during learning (in a psychologically safe environment) can result in deeper, more rigorous learning experiences.

Here's What I've Learned Through AI

As with anything that is new for me, I like to start small. When I began hearing more and more about AI, I decided I would engage in a low-risk activity, which leads me to the first lesson I learned through AI. I used it for cooking. Yes, cooking.

After some major inspiration over the last six months, I began experimenting with gourmet cooking. Please keep in mind that prior to June of this year, I struggled to open a can of tuna fish, so it may surprise you that I now use a sous vide or Big Green Egg to make filet mignon, bleu cheese turkey burgers with pesto, salmon, halibut, or sesame chicken with my own sesame sauce. What does that have to do with AI? I use AI to give me recipes for special sauces, like the one I will make for the pumpkin ravioli I will be serving to guests tonight.

Secondly, I use AI to ask better questions. In a previous blog, I wrote about using AI as a leadership coaching assistant. After sessions are done, I go back and reflect on the questions I asked, and I have engaged in reading books to help me better learn what questions I could be asking. Additionally, I find that when I ask AI a question and the answers it provides are not on point, there have been many times when it was my question that encouraged that answer. I needed to go back and rephrase the question so AI understood me better. That's something we can always do in conversations with humans.

Along with using the AI personal assistant to ask better questions, I also have learned to use AI to see how much I talk during sessions as opposed to the people I coach. Fortunately, I have seen that in most cases, the person being coached does talk more than I do. However, there have been those times where I cut it close, and that matters to me. I strive to listen more than I talk.
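Measuring who talks more in a session does not require anything exotic: given a speaker-labeled transcript, it is a word count per speaker. The snippet below is a hypothetical sketch (the `talk_ratio` helper and the sample session are invented for illustration), not the actual tool described above.

```python
def talk_ratio(transcript):
    """Return each speaker's share of the total words spoken.

    `transcript` is a list of (speaker, utterance) pairs, the kind of
    output a speaker-labeled AI transcription assistant produces.
    """
    words = {}
    for speaker, utterance in transcript:
        words[speaker] = words.get(speaker, 0) + len(utterance.split())
    total = sum(words.values())
    return {speaker: count / total for speaker, count in words.items()}

# Hypothetical coaching exchange.
session = [
    ("coach", "What outcome would make this conversation useful?"),
    ("client", "I want to delegate more but I keep taking work back "
               "because I worry it will not be done to my standard."),
    ("coach", "What does taking it back cost you?"),
    ("client", "Evenings, mostly, and my team never gets to grow."),
]
shares = talk_ratio(session)
```

Here the coach accounts for well under half the words spoken, which is the pattern the author says he aims for.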

The fourth lesson I have learned when using AI is that it helps keep me inspired during moments when I lack inspiration. I have used it to give me a boost when considering keynotes, workshop activities, or topics to cover in this blog. Is it perfect? No. However, I noticed that although it may not give me the exact information I need, it has inspired me to read what it gives and think, "I wonder if I could," and new ideas come to me during those moments.

Lastly, in reading this outstanding ISTE article, I learned that there are several types of AI, which are:

Reactive - Tools that respond to specific inputs or situations without learning (e.g., Alexa).

Predictive - Tools that analyze historical data and experiences to predict future events or behaviors (e.g., Netflix).

Generative - Tools that generate new content or outputs, often creating something novel from learned patterns (e.g., ChatGPT).

In the End

Those who fear AI have probably been using it already when they ask Alexa to play a song or when they get on Netflix and click on the movie that Netflix said they might want to watch. It seems that generative is the one that makes most people nervous because of IP rules or that it may not be able to provide the most accurate information, which is kind of ironic given how many people spread misinformation through gossip.


Pentagon’s AI initiatives accelerate hard decisions on lethal autonomous weapons. – ABC News

NATIONAL HARBOR, Md. -- Artificial intelligence employed by the U.S. military has piloted pint-sized surveillance drones in special operations forces missions and helped Ukraine in its war against Russia. It tracks soldiers fitness, predicts when Air Force planes need maintenance and helps keep tabs on rivals in space.

Now, the Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative dubbed Replicator seeks to galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap, and many, Deputy Secretary of Defense Kathleen Hicks said in August.

While its funding is uncertain and details vague, Replicator is expected to accelerate hard decisions on what AI tech is mature and trustworthy enough to deploy - including on weaponized systems.

There is little dispute among scientists, industry experts and Pentagon officials that the U.S. will within the next few years have fully autonomous lethal weapons. And though officials insist humans will always be in control, experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles.

Thats especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them and neither China, Russia, Iran, India or Pakistan have signed a U.S.-initiated pledge to use military AI responsibly.

It's unclear if the Pentagon is currently formally assessing any fully autonomous lethal weapons system for deployment, as required by a 2012 directive. A Pentagon spokeswoman would not say.

Paradigm shifts

Replicator highlights immense technological and personnel challenges for Pentagon procurement and development as the AI revolution promises to transform how wars are fought.

"The Department of Defense is struggling to adopt the AI developments from the last machine-learning breakthrough," said Gregory Allen, a former top Pentagon AI official now at the Center for Strategic and International Studies think tank.

The Pentagon's portfolio boasts more than 800 AI-related unclassified projects, many still in testing. Typically, machine learning and neural networks are helping humans gain insights and create efficiencies.

"The AI that we've got in the Department of Defense right now is heavily leveraged and augments people," said Missy Cummings, director of George Mason University's robotics center and a former Navy fighter pilot. "There's no AI running around on its own. People are using it to try to understand the fog of war better."

Space, war's new frontier

One domain where AI-assisted tools are tracking potential threats is space, the latest frontier in military competition.

China envisions using AI, including on satellites, to "make decisions on who is and isn't an adversary," U.S. Space Force chief technology and innovation officer Lisa Costa told an online conference this month.

The U.S. aims to keep pace.

An operational prototype called Machina, used by Space Force, keeps tabs autonomously on more than 40,000 objects in space, orchestrating thousands of data collections nightly with a global telescope network.

Machina's algorithms marshal telescope sensors. Computer vision and large language models tell them what objects to track. And AI choreographs the process, drawing instantly on astrodynamics and physics datasets, Col. Wallace "Rhet" Turnbull of Space Systems Command told a conference in August.

Another AI project at Space Force analyzes radar data to detect imminent adversary missile launches, he said.

Maintaining planes and soldiers

Elsewhere, AI's predictive powers help the Air Force keep its fleet aloft, anticipating the maintenance needs of more than 2,600 aircraft including B-1 bombers and Blackhawk helicopters.

Machine-learning models identify possible failures dozens of hours before they happen, said Tom Siebel, CEO of Silicon Valley-based C3 AI, which has the contract. C3's tech also models the trajectories of missiles for the U.S. Missile Defense Agency and identifies insider threats in the federal workforce for the Defense Counterintelligence and Security Agency.

Among health-related efforts is a pilot project tracking the fitness of the Army's entire Third Infantry Division, more than 13,000 soldiers. Predictive modeling and AI help reduce injuries and increase performance, said Maj. Matt Visser.

Aiding Ukraine

In Ukraine, AI provided by the Pentagon and its NATO allies helps thwart Russian aggression.

NATO allies share intelligence from data gathered by satellites, drones and humans, some of it aggregated with software from U.S. contractor Palantir. Some data comes from Maven, the Pentagon's pathfinding AI project now mostly managed by the National Geospatial-Intelligence Agency, say officials including retired Air Force Gen. Jack Shanahan, the inaugural Pentagon AI director.

Maven began in 2017 as an effort to process video from drones in the Middle East, spurred by U.S. special operations forces fighting ISIS and al-Qaeda, and now aggregates and analyzes a wide array of sensor- and human-derived data.

AI has also helped the U.S.-created Security Assistance Group-Ukraine help organize logistics for military assistance from a coalition of 40 countries, Pentagon officials say.

All-Domain Command and Control

To survive on the battlefield these days, military units must be small, mostly invisible and move quickly, because exponentially growing networks of sensors let anyone "see anywhere on the globe at any moment," then-Joint Chiefs chairman Gen. Mark Milley observed in a June speech. "And what you can see, you can shoot."

To more quickly connect combatants, the Pentagon has prioritized the development of intertwined battle networks called Joint All-Domain Command and Control to automate the processing of optical, infrared, radar and other data across the armed services. But the challenge is huge and fraught with bureaucracy.

Christian Brose, a former Senate Armed Services Committee staff director now at the defense tech firm Anduril, is among military reform advocates who nevertheless believe they "may be winning here to a certain extent."

"The argument may be less about whether this is the right thing to do, and increasingly more about how do we actually do it -- and on the rapid timelines required," he said. Brose's 2020 book, "The Kill Chain," argues for urgent retooling to match China in the race to develop smarter and cheaper networked weapons systems.

To that end, the U.S. military is hard at work on "human-machine teaming." Dozens of uncrewed air and sea vehicles currently keep tabs on Iranian activity. U.S. Marines and Special Forces also use Andurils autonomous Ghost mini-copter, sensor towers and counter-drone tech to protect American forces.

Industry advances in computer vision have been essential. Shield AI's software lets drones operate without GPS, communications or even remote pilots. It's the key to the company's Nova, a quadcopter that U.S. special operations units have used in conflict areas to scout buildings.

On the horizon: The Air Force's "loyal wingman" program intends to pair piloted aircraft with autonomous ones. An F-16 pilot might, for instance, send out drones to scout, draw enemy fire or attack targets. Air Force leaders are aiming for a debut later this decade.

The race to full autonomy

The "loyal wingman" timeline doesn't quite mesh with Replicator's, which many consider overly ambitious. The Pentagon's vagueness on Replicator, meantime, may partly be intended to keep rivals guessing, though planners may also still be feeling their way on feature and mission goals, said Paul Scharre, a military AI expert and author of "Four Battlegrounds."

Anduril and Shield AI, each backed by hundreds of millions in venture capital funding, are among companies vying for contracts.

Nathan Michael, chief technology officer at Shield AI, estimates the company will have an autonomous swarm of at least three uncrewed aircraft ready in a year using its V-BAT aerial drone. The U.S. military currently uses the V-BAT -- without an AI mind -- on Navy ships, on counter-drug missions and in support of Marine Expeditionary Units, the company says.

It will take some time before larger swarms can be reliably fielded, Michael said. "Everything is crawl, walk, run -- unless you're setting yourself up for failure."

The only weapons systems that Shanahan, the inaugural Pentagon AI chief, currently trusts to operate autonomously are wholly defensive, like Phalanx anti-missile systems on ships. He worries less about autonomous weapons making decisions on their own than about systems that don't work as advertised or kill noncombatants or friendly forces.

The department's current chief digital and AI officer Craig Martell is determined not to let that happen.

"Regardless of the autonomy of the system, there will always be a responsible agent that understands the limitations of the system, has trained well with the system, has justified confidence of when and where it's deployable -- and will always take the responsibility," said Martell, who previously headed machine learning at LinkedIn and Lyft. "That will never not be the case."

As to when AI will be reliable enough for lethal autonomy, Martell said it makes no sense to generalize. For example, Martell trusts his car's adaptive cruise control but not the tech that's supposed to keep it from changing lanes. "As the responsible agent, I would not deploy that except in very constrained situations," he said. "Now extrapolate that to the military."

Martell's office is evaluating potential generative AI use cases (it has a special task force for that) but focuses more on testing and evaluating AI in development.

One urgent challenge, says Jane Pinelis, chief AI engineer at Johns Hopkins University's Applied Physics Lab and former chief of AI assurance in Martell's office, is recruiting and retaining the talent needed to test AI tech. The Pentagon can't compete on salaries. Computer science PhDs with AI-related skills can earn more than the military's top-ranking generals and admirals.

Testing and evaluation standards are also immature, a recent National Academy of Sciences report on Air Force AI highlighted.

Might that mean the U.S. will one day be fielding, under duress, autonomous weapons that don't fully pass muster?

"We are still operating under the assumption that we have time to do this as rigorously and as diligently as possible," said Pinelis. "I think if we're less than ready and it's time to take action, somebody is going to be forced to make a decision."


Philips to unveil new AI-enabled imaging innovations at #RSNA – News | Philips

Philips HealthSuite Imaging is a cloud-based next generation of Philips Vue PACS, enabling radiologists and clinicians to adopt new capabilities faster, increase operational efficiency and improve patient care. HealthSuite Imaging on Amazon Web Services (AWS) offers new capabilities such as high-speed remote access for diagnostic reading, integrated reporting and AI-enabled workflow orchestration, all delivered securely via the cloud to ease the IT management burden. Also unveiled at RSNA is Philips AI Manager, an end-to-end AI enablement solution that integrates with a customer's IT infrastructure, allowing radiologists to leverage more than 100 AI applications for a more comprehensive assessment and deeper clinical insights in the radiology workflow.

Speed and efficiency are critical to diagnosis and treatment. At RSNA, Philips will also spotlight its newest innovations in digital X-ray, including the Philips Radiography 7000 M, a premium mobile radiography solution designed for faster, more efficient patient care, and the Philips Radiography 7300 C, a premium digital radiography system designed to deliver high efficiency and clinical versatility. Also featured is the next-generation image-guided therapy system Azurion 7 in its B20/15 biplane configuration, providing superb positioning capability for easier patient access during minimally invasive procedures, faster system movement, and full tableside control of all components.


Quizon aims for GM title in round-robin chess tournament – The Manila Times

International Master Daniel Quizon will compete against higher-rated chess players in the Kamatyas Invitational GM Tournament 2023, to be held at the GM Eugene Torre Chess Museum in Pan de Amerikana, Concepcion Dos, Marikina City from December 9 to 17, 2023.

Aside from topping the round-robin tournament, the pride of Dasmariñas City also aims to become the country's 18th chess Grandmaster.

Quizon has actually earned the third and final GM norm following his golden performance in the open under-20 standard division of the 21st ASEAN+ Age-Group Chess Championships in Bangkok, Thailand last June.

However, Quizon still needs to compete in a round-robin tournament where at least half of the competitors are titled players and raise his current Elo rating of 2417 to 2500.

International Master Michael Concio Jr. (Elo 2403), who is also aiming to become a Grandmaster, is also taking part in this weeklong FIDE-rated standard chess tournament hosted by International Master Roderick Nava and National Master David Almirol Jr. of the Kamatyas Chess Club in cooperation with the National Chess Federation of the Philippines. Concio clinched his first of three GM norms, plus an outright International Master title, after winning the Eastern Asia Juniors Chess Championships held at the Lima Hotel in Tanauan, Batangas in 2019.

Quizon and Concio are members of the star-studded Dasmariñas Chess Academy backed by Mayor Jenny Barzaga, Rep. Elpidio "Pidi" Barzaga Jr. and national coach FIDE Master Roel Abelgas.

Also competing are Grandmaster Susanto Megaranto of Indonesia (2495), Grandmaster Tran Tuan Minh of Vietnam (2484), Grandmaster Wei Ming Kevin Goh (2471), Woman International Master Miaoyi Lu of China (2281) and International Master Joel Banawa of the USA (2414).

The other Filipino players invited to join the tournament are International Master Paulo Bersamina (2428), International Master Emmanuel Garcia (2403) and Grandmaster Rogelio "Joey" Antonio Jr. (2386).


Over 1000 players to take part in Alef Chess Olympiad 2023 in Sharjah – Gulf Today

Dr Sheikh Khalid Bin Humaid Al Qasimi, Issa Ataya and Abdullah Murad Al Mazmi during the press conference.

The Olympiad will be held at Al Mamsha from Nov. 27 to Dec. 18, 2023, with the participation of over 1,000 players from different countries. The details of the Alef International Chess Olympiad were disclosed at a press conference held at the Sharjah Cultural & Chess Club.

Dr Sheikh Khalid Bin Humaid Al Qasimi, Chairman of the Board of Directors of the Sharjah Cultural & Chess Club; Imran Abdullah Al Nuaimi, Vice Chairman of the Sharjah Cultural & Chess Club, and Issa Ataya, CEO of Alef Group, addressed the media.

Dr Sheikh Khalid hailed His Highness Dr Sheikh Sultan Bin Mohammed Al Qasimi, Supreme Council Member and Ruler of Sharjah, for his unwavering support of the club and its players.

He also thanked Sheikh Sultan Bin Muhammad Bin Sultan Al Qasimi, Crown Prince and Deputy Ruler of Sharjah, and Sheikh Abdullah Bin Salem Al Qasimi, Deputy Ruler of Sharjah, for their support.

Dr Sheikh Khalid also thanked Eisa Hilal Al Hazami and the Sharjah Sports Council for their continued support.

He affirmed that the organization of this tournament aims to provide a significant opportunity and great benefit for the club's players and the players of the national team.

We are pleased to announce the organization of the Alef International Chess Olympiad 2023, sponsored by Alef Group.

We are particularly thankful to Issa (Ataya), the CEO of Alef Group, for his tireless efforts in making this Olympiad a huge success, as over a thousand athletes from across the world will gain from taking part in these events. This marks the fourth time Alef Group has sponsored tournaments organized by the Sharjah Cultural & Chess Club.

Praising Alef Group, he stressed that the partnership with the group reflects the keenness of both parties to support talent and develop strategic thinking among young people.

Nuaimi revealed that the Alef International Chess Olympiad includes four main tournaments: the Super Stars Championship, National Day, Open Stars, and Future Championship.

In the Super Stars Championship, four senior international masters, namely Yu Yangi from China (rating 2720), Sanan Gujjirov from Hungary (rating 2703), Nehal Sarin from India (rating 2692), and Salem Abdel Rahman from the UAE (rating 2635), will take part. Dhs55,000 is on offer at the championship.

The Stars Championship is open to players of all nationalities and is internationally classified; approximately 500 players will compete. The National Day Championship will see the participation of 80 Emirati players and coincide with the 52nd National Day of the UAE.

The Future Championship includes 400 players from different schools across the country.

Ataya expressed his pride in the achievements of the Sharjah Cultural & Chess Club.

These achievements stand as a complement to the sporting journey and cultural renaissance in the UAE.

He commended the steadfast support provided by the wise leadership of the UAE across all sporting arenas. Ataya praised the support and attention given by Dr. Sheikh Sultan for the development of the sports sector and athletes.

"It is an honor for Alef Group to be a strategic partner and official sponsor for the Alef Chess Olympiad in Al Mamsha, Sharjah 2023, organized by the Sharjah Cultural and Chess Club. The event coincides with the National Day of the UAE."

He emphasized Alef Group's commitment to supporting this tournament, highlighting its cultural and youth significance in promoting analytical and strategic thinking, talent development, and fostering a spirit of fair competition.

Alef Group sees this as a continuation of the journey initiated by national and private entities in supporting sports and athletes.

Ataya expressed his gratitude to all those who are contributing to the success of the Chess Olympiad.

He also thanked Dr. Sheikh Khalid, as well as club members and arbitration committees.


Commentary: Biden’s executive order on AI is ambitious and … – The Spokesman Review

Last month President Joe Biden issued an executive order on artificial intelligence, the government's most ambitious attempt to set ground rules for this technology. The order focuses on establishing best practices and standards for AI models, seeking to constrain Silicon Valley's propensity to release products before they've been fully tested, to "move fast and break things."

But despite the order's scope (it's 111 pages and covers a range of issues, including industry standards and civil rights), two glaring omissions may undermine its promise.

The first is that the order fails to address the loophole provided by Section 230 of the Communications Decency Act. Much of the consternation surrounding AI has to do with the potential for deep fakes (convincing video, audio and image hoaxes) and misinformation. The order does include provisions for watermarking and labeling AI content so people at least know how it's been generated. But what happens if the content is not labeled?

Much of the AI-generated content will be distributed on social media sites such as Instagram and X (formerly Twitter). The potential harm is frightening: Already there's been a boom of deep fake nudes, including of teenage girls. Yet Section 230 protects platforms from liability for most content posted by third parties. If the platform has no liability for distributing AI-generated content, what incentive does it have to remove it, watermarked or not?

Imposing liability only on the producer of the AI content, rather than on the distributor, will be ineffective at curbing deep fakes and misinformation because the content producer may be hard to identify, out of jurisdictional bounds or unable to pay if found liable. Shielded by Section 230, the platform may continue to spread harmful content and may even receive revenue for it if its in the form of an ad.

A bipartisan bill sponsored by Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., seeks to address this liability loophole by removing 230 immunity for claims and charges related to generative artificial intelligence. The bill does not, however, seem to resolve the question of how to apportion responsibility between the AI companies that generate the content and the platforms that host it.

The second worrisome omission from the AI order involves terms of service, the annoying fine print that plagues the internet and pops up with every download. Although most people hit accept without reading these terms, courts have held that they can be binding contracts. This is another liability loophole for companies that make AI products and services: They can unilaterally impose long and complex one-sided terms allowing illegal or unethical practices and then claim we have consented to them.

In this way, companies can bypass the standards and best practices set by advisory panels. Consider what happened with Web 2.0 (the explosion of user-generated content dominated by social media sites). Web tracking and data collection were ethically and legally dubious practices that contravened social and business norms. Facebook, Google and others, however, could defend themselves by claiming that users consented to these intrusive practices when they clicked to accept the terms of service.

In the meantime, companies are releasing AI products to the public, some without adequate testing and encouraging consumers to try out their products for free. Consumers may not realize that their free use helps train these models and so their efforts are essentially unpaid labor. They also may not realize that they are giving up valuable rights and taking on legal liability.

For example, OpenAI's terms of service state that the services are provided "as is," with no warranty, and that the user will "defend, indemnify, and hold harmless" OpenAI from any claims, losses and expenses (including attorneys' fees) arising from use of the services. The terms also require the user to waive the right to a jury trial and class action lawsuit. Bad as such restrictions may seem, they are standard across the industry. Some companies even claim a broad license to user-generated AI content.

Biden's AI order has largely been applauded for trying to strike a balance between protecting the public interest and innovation. But to give the provisions teeth, there must be enforcement mechanisms and the threat of lawsuits. The rules to be established under the order should limit Section 230 immunity and include standards of compliance for platforms. These might include procedures for reviewing and taking down content, mechanisms to report issues within the company and externally, and minimum response times from companies to external concerns. Furthermore, companies should not be allowed to use terms of service (or other forms of consent) to bypass industry standards and rules.

We should heed the hard lessons from the last two decades to avoid repeating the same mistakes. Self-regulation for Big Tech simply does not work, and broad immunity for profit-seeking corporations creates socially harmful incentives to grow at all costs. In the race to dominate the fiercely competitive AI space, companies are almost certain to prioritize growth and discount safety. Industry leaders have expressed support for guardrails, testing and standardization, but getting them to comply will require more than their good intentions; it will require legal liability.

Nancy Kim is a law professor at Chicago-Kent College of Law, Illinois Institute of Technology.
