Category Archives: AI

I Wore Meta Ray-Bans in Montreal to Test Their AI Translation Skills. It Did Not Go Well – WIRED

Imagine you've just arrived in another country, you don't speak the language, and you stumble upon a construction zone. The air is thick with dust. You're tired. You still stink like airplane. You try to ignore the jackhammers to decipher what the signs say: Do you need to cross the street, or walk up another block, or turn around?

I was in exactly such a situation this week, but I came prepared. I'd flown to Montreal to spend two days testing the new AI translation feature on Meta's Ray-Ban smart sunglasses. Within 10 minutes of setting out on my first walk, I ran into a barrage of confusing orange detour signs.

The AI translation feature is meant to give wearers a quick, hands-free way to understand text written in foreign languages, so I couldn't have devised a better pop quiz on how it works in real time.

As an excavator rumbled, I looked at a sign and started asking my sunglasses to tell me what it said. Before I could finish, a harried Québécois construction worker started shouting at me and pointing northwards, and I scurried across the street.

Right at the start of my AI adventure, I'd run into the biggest limitation of this translation software: it doesn't, at the moment, tell you what people say. It can only parse the written word.

I already knew that the feature was writing-only at the moment, so that was no surprise. But soon, I'd run into its other, less obvious constraints. Over the next 48 hours, I tested the AI translation on a variety of street signs, business signs, advertisements, historical plaques, religious literature, children's books, tourism pamphlets, and menus, with wildly varied results.

Sometimes it was competent, like when it told me that the book I picked up for my son, Trois Beaux Bébés, was about three beautiful babies. (Correct.) It told me repeatedly that "ouvert" meant "open," which, to be frank, I already knew, but I wanted to give it some layups.

Other times, my robot translator was not up to the task. It told me that the sign for the notorious adult movie theater Cinéma L'Amour translated to "Cinéma L'Amour." (F for effort; Google Translate at least changed it to "Cinema Love.")

At restaurants, I struggled to get it to read me every item on a menu. For example, instead of telling me all of the different burger options at a brew pub, it simply told me that there were burgers and sandwiches, and refused to get more specific despite my wheedling.

When I went to an Italian spot the next night, it similarly gave me a broad summary of the offerings rather than breaking them down in detail. I was told there were grilled meat skewers, but not, for example, that there were duck confit, lamb, and beef options, or how much they cost.

All in all, right now, the AI translation is more of a temperamental party trick than a genuinely useful travel tool for foreign climes.

To use the AI translation, a glasses-wearer needs to say the following magic words: "Hey Meta, look at" and then ask it to translate what it's looking at.

The glasses take a snapshot of whatever is in front of you, and then tell you about the text after a few seconds of processing. I'd expected more straightforward translations, but it rarely spits out word-for-word breakdowns. Instead, it paraphrases what it sees or offers a broad summary.

Figma announces big redesign with AI – The Verge

Figma is announcing a bunch of new features at its Config conference today, including a major UI redesign, new generative AI tools to help people more easily make projects, and built-in slideshow functionality.

Let's start with the redesign, which is intended to "lay the foundation for the next decade," according to a blog post. You'll see things like a new toolbar, rounded corners, and 200 new icons. As part of the design refresh, the company wants to focus the canvas "less on our UI and more on your work" and make something that's approachable to new users while still being useful to Figma experts.

Figma says this is the company's third significant redesign since Figma's closed beta launch. The new look is rolling out as part of a limited beta, and users can join a waitlist if they want to try it out.

Beyond the redesign, the headline feature addition is new generative AI tools, which look like a useful way to quickly get started with a design. They're basically a Figma-focused version of the "draft an email"-type AI tools we've seen many times.

In a briefing, Figma chief product officer Yuhki Yamashita showed me an example of how Figma could create an app design for a new restaurant. A few seconds after he typed the prompt into a textbox, Figma mocked up an app with menu listings, a tab bar, and even buttons for delivery partners like Uber Eats and DoorDash. It looked like a generic mobile app mock-up, but Yamashita was able to start tweaking it right away.

In another example, Yamashita asked Figma AI to spin up a design for a recipe page for chocolate chip cookies, and sure enough, it did, including an AI-generated image of a cookie. Over Zoom, it looked like a pretty accurate image, but I can't imagine that a basic image of a chocolate chip cookie is hard for an AI generator to make.

Figma is also introducing AI features that could help speed up small tasks in big ways, such as AI-enhanced asset search and auto-generated text in designs instead of generic Lorem ipsum placeholder text.

Ideally, all of the new Figma AI tools will allow people who are newer to Figma to test ideas more easily while letting those who are more well versed in the app iterate more quickly, according to Yamashita. "We're using AI to lower the floor and raise the ceiling," Yamashita says in an interview with The Verge, something CEO Dylan Field has said to The Verge as well.

Figma AI is launching in a limited beta beginning on Wednesday, and interested users can get on the waitlist. Figma says the beta period will run through the end of the year. While in beta, Figma's AI tools will be free, but the company says it might have to introduce usage limits. Figma is also promising clear guidance on pricing when the AI features officially launch.

In a blog post, Figma also spelled out its approach to training its AI models. "All of the generative features we're launching today are powered by third-party, out-of-the-box AI models and were not trained on private Figma files or customer data," writes Kris Rasmussen, Figma's CTO. "We fine-tuned visual and asset search with images of user interfaces from public, free Community files."

Rasmussen adds that Figma trains its models so they learn patterns and Figma-specific concepts and tools, but not from users' content. Figma is also going to let Figma admins control whether Figma can train on customer content, which includes "file content created in or uploaded to Figma by a user, such as layer names and properties, text and images, comments, and annotations," according to Rasmussen.

Figma won't start training on this content until August 15th; however, you should know that Starter and Professional plans are by default opted in to share this data, while Organization and Enterprise plans are opted out.

The company is likely being specific about how it trains its AI models because of Adobe's recent terms of service disaster, where the company had to clarify that it wouldn't train AI on your work.

In addition to the redesign and the new AI features, Figma is adding a potentially very practical new tool: Figma Slides, a Google Slides-like feature built right into Figma. Yamashita says that users have already been hacking Figma to find a way to make slides, so now there's an official method to build and share presentations right inside the app.

There are a few Figma-specific features that designers will likely appreciate. You'll be able to tweak designs you've included in the deck in real time using Figma's tools. (Note that those changes will only appear in the deck; tweaks won't currently sync back to the original design files, though Yamashita says that Figma wants to make that possible eventually.)

You can also present an app prototype right from the deck, meaning you don't need to make a convoluted screen recording just to demonstrate how one piece connects to another. You can also add interactive features for audience members, like a poll or an alignment scale, where people can plot on a range whether they agree or disagree with something.

Figma Slides will be available in open beta beginning on Wednesday. It will be free while in beta but will become a paid feature when it officially launches. The company is also adding new features for its developer mode in Figma, including a "ready for dev" task list.

This year's Config is the first since Adobe abandoned its planned $20 billion acquisition of Figma following regulatory scrutiny. With the dissolution of the merger, Adobe was forced to pay Figma a $1 billion breakup fee.

Google touts enterprise-ready AI with more facts and less make-believe – The Verge

Vertex AI, the Google Cloud development platform that allows companies to build services using Google's machine learning and large language models, is getting new capabilities to help prevent apps and services from pushing inaccurate information. After rolling out general availability for Vertex AI's Grounding with Google Search feature in May, which enables models to retrieve live information from the internet, Google has now announced that customers will also have the option to improve their services' AI results with specialized third-party datasets.

Google says the service will utilize data from providers like Moody's, MSCI, Thomson Reuters, and ZoomInfo, and that grounding with third-party datasets will be available in Q3 this year. This is one of several new features that Google is developing to encourage organizations to adopt its enterprise-ready generative AI experiences by reducing how often models spit out misleading or inaccurate information.

Another is high-fidelity mode, which enables organizations to source information for generated outputs from their own corporate datasets instead of Gemini's wider knowledge bank. High-fidelity mode is powered by a specialized version of Gemini 1.5 Flash and is available now in preview via Vertex AI's Experiments tool.

Vector Search, which allows users to find images by referencing similar graphics, is also being expanded to support hybrid search. The update is available in public preview and allows those vector-based searches to be paired with text-based keyword searches to improve accuracy. Grounding with Google Search will soon also provide a dynamic retrieval feature that automatically selects whether information should be sourced from Gemini's established datasets or from Google Search for prompts that may require frequently updated resources.
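
For readers curious what grounding looks like from the developer's side, here is a minimal sketch of Grounding with Google Search using the vertexai Python SDK. The project ID and prompt are placeholders, and the module path for the grounding classes has shifted between SDK versions, so treat this as an illustration of the pattern rather than Google's canonical example.

```python
# Minimal sketch: Grounding with Google Search on Vertex AI.
# Assumes the google-cloud-aiplatform (vertexai) SDK; the grounding module's
# exact location varies by SDK version, so check the current docs.
import vertexai
from vertexai.generative_models import GenerativeModel, Tool, grounding

vertexai.init(project="my-project", location="us-central1")  # placeholder project

# A tool that lets the model pull in live Google Search results instead of
# answering only from its training data.
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

model = GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Which teams won their matches yesterday?",  # a prompt needing fresh data
    tools=[search_tool],
)
print(response.text)  # grounded answers also carry citation metadata
```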

Integrating Artificial Intelligence and Machine Learning in the Marine Corps – War On The Rocks

Every day, thousands of marines perform routine data-collection tasks and make hundreds of data-based decisions. They compile manning data on whiteboards to decide how to staff units, screenshot weather forecasts and paste them into weekly commanders' update briefings, and submit training entries by hand. But anyone who has used ChatGPT or other large-scale data analytic services in the last two years knows the immense power of generative AI to streamline these processes and improve the quality of these decisions by basing them on fresh and comprehensive data.

The U.S. Marine Corps has finally caught wind. Gen. Eric Smith's new message calls for the service to recognize that "[t]echnology has exponentially increased information's effects on the modern battlefield, making our need to exploit data more important than ever." The service's stand-in forces operating concept relies on marine operating forces to integrate into networks of sensors, using automation and machine learning to simplify decision processes and kill chains. Forces deployed forward in littoral environments will be sustained by a supply system that uses data analysis for predictive maintenance, identifying which repair parts the force will need in advance.

However, there is a long way to go before these projections become reality. A series of interviews with key personnel in the Marine Corps operating forces and supporting establishment, other services, and combatant commands over the past six months reveals that the service needs to move more quickly if it intends to use AI and machine learning to execute this operating concept. Despite efforts from senior leaders to nudge the service towards integrating AI and machine learning, only incremental progress has been made.

The service depends on marines possessing the technical skills to make data legible to automated analytic systems and enable data-informed decisions. Designating a Marine expeditionary force or one of its major subordinate commands as the lead for data analysis and literacy would unify the service's two-track approach by creating an ecosystem that will allow bottom-up creativity, scale innovation across the force, and speed the integration of these technologies into the fleet and supporting establishment.

New Technology's Potential to Transform Operations, Logistics, and Education

AI, machine learning, and data analysis can potentially transform military education, planning, and operations. Experiments at Marine Corps University have shown that they could allow students to hone operational art in educational settings by probing new dimensions of complicated problems and understanding the adversary's system. AI models, trained on enemy doctrinal publications and open-source information about troop employment, can use probabilistic reasoning to predict an enemy's response. This capability could supplement intelligence red teams by independently analyzing the adversary's options, improve a staff's capacity for operational planning, or simply give students valuable analytic experience. And NIPRGPT, a new Air Force project, promises to upend mundane staff work by generating documents and emails in a secure environment.

Beyond education and planning, AI and machine learning can transform how the Marine Corps fights. During an operation, AI could employ a networked collection of manned and unmanned systems to reconnoiter and attack an adversary. It could also synthesize and display data from sensor networks more quickly than human analysts or sift through thousands of images to identify particular scenes or locations of interest. Algorithms can either make decisions themselves or enable commanders to make data-informed decisions in previously unthinkable ways. From AI-enabled decision-making to enhanced situational awareness, this technology has the potential to revolutionize military operations. A team of think tank researchers even used AI recently to rethink the Unified Command Plan.

But achieving these futuristic visions will require the service to develop technical skills and familiarity with this technology before implementing it. Developing data literacy is a prerequisite to effectively employing advanced systems, so this skill is as important as anything else the service expects of marines. Before the Marine Corps can use AI-enabled swarms of drones to take a beachhead or use predictive maintenance to streamline supply operations, its workforce needs to know how to work with data analysis tools and be comfortable applying them in everyday work settings.

Delivering for the Marine Corps Today

If the Marine Corps wants to employ machine learning and AI in combat, it should teach marines how to use them in stable and predictable garrison operations. Doing so could save the service tens of thousands of hours annually while increasing combat effectiveness and readiness by replacing the antiquated processes and systems the fleet marine force relies on.

The operating forces are awash with legible data that can be used for analysis. Every unit has records of serialized equipment, weapons, and classified information. Most of these records are maintained in antiquated computer-based programs of record or Excel spreadsheets, offering clear opportunities for optimization.

Furthermore, all marines in the fleet do yearly training and readiness tasks to demonstrate competence in their assigned functions. Nothing happens to this data once it is submitted in the Marine Corps Training Information Management System: no headquarters echelon traces performance over time to ensure that marines are improving, besides an occasional cursory glance during a Commanding General's Inspection visit. This system is labor intensive, requiring manual entries for each training event and each individual marine's results.

Establishing and analyzing performance standards from these events could identify which units have the most effective training regimens. Leaders who outperform could be rewarded, and a Marine expeditionary force could establish best practices across its subordinate units to improve combat readiness. Automating or streamlining data entry and analysis would be straightforward since AI excels at performing repetitive tasks with clear parameters. Doing so would save time while increasing the combat proficiency of the operating forces.

Marines in the operating forces perform innumerable routine tasks that could be easily automated. For example, marines in staff sections gather data and format it into command and staff briefings each week. Intelligence officers retrieve weather forecast data from their higher headquarters. Supply officers insert information on supply levels into the brief. Medical and dental readiness numbers are usually displayed in a green/yellow/red stoplight chart. This data is compiled by hand in PowerPoint slide decks. These simple tasks could be automated, saving thousands of hours across an entire Marine expeditionary force. Commanders would benefit by making decisions based on the most up-to-date information rather than relying on stale data captured hours before.
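
To make the scale of the opportunity concrete, here is a hypothetical sketch in Python. The CSV export and its column names are invented for the example; the point is that the stoplight chart a staff section rebuilds by hand each week reduces to a few lines of code once the underlying data is machine-readable.

```python
# Hypothetical sketch: turn a readiness export into the familiar
# green/yellow/red stoplight summary. File name and columns are invented.
import pandas as pd

df = pd.read_csv("readiness_export.csv")  # e.g. unit,medical_pct,dental_pct,supply_pct

def stoplight(pct: float) -> str:
    """Map a readiness percentage to a stoplight color."""
    if pct >= 90:
        return "GREEN"
    if pct >= 75:
        return "YELLOW"
    return "RED"

for col in ["medical_pct", "dental_pct", "supply_pct"]:
    df[col.replace("_pct", "_status")] = df[col].map(stoplight)

# Scheduled to run nightly, the same frame could feed a slide or dashboard,
# so commanders brief from live data rather than numbers captured hours ago.
print(df.to_string(index=False))
```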

The Marine Corps uses outdated processes and systems that waste valuable time that could be spent on training and readiness. Using automation, machine learning, and AI to streamline routine tasks and allow commanders to make decisions based on up-to-date data will enable the service to achieve efficiency savings while increasing its combat effectiveness. In Smith's words, "combining human talent and advanced processes [will allow the Marine Corps] to become even more lethal in support of the joint force and our allies and partners."

The Current Marine Corps Approach

The service is slow in moving towards its goals because it has decided, de facto, to pursue a two-track development strategy. It has concentrated efforts and resources at the highest echelons of the institution while relying on the rare confluence of expertise and individual initiative for progress at the lowest levels. This bifurcated approach lacks coherence and stymies progress.

Marine Corps Order 5231.4 outlines the service's approach to AI. Rather than making the operating forces the focus of effort, the order weights efforts in the supporting establishment. The supporting establishment has the expertise, resources, and authority to manage a program across the Marine Corps. But it lacks visibility into the specific issues facing individuals that could be solved with AI, machine learning, or automated data analysis.

At the tactical levels of the service, individuals are integrating these tools into their workflows. However, without broader sponsorship, this mainly occurs as the result of happy coincidence: when a single person has the technical skills to develop an automated data solution, recognizes a shortfall, and takes the initiative to implement it. Because the skills required to create, maintain, or customize projects for a unit are uncommon, scaling adoption or expanding the project is difficult. As a result, most individual projects wither on the vine, and machine learning, AI, and data analysis have only sporadically and temporarily penetrated the operating forces.

This two-track approach separates resources from problems. This means that the highest level of the service isn't directly involved in success at the tactical level. Tactical echelons don't have the time, resources, or tasking to develop and systematize these skill sets on their own. What's needed is a flat and collaborative bottom-up approach with central coordination.

The 18th Airborne Corps

Marine Corps doctrine and culture advocate carefully balancing centralized planning with decentralized execution and bottom-up refinement. Higher echelons pass flexible instructions to their subordinates, increasing specificity at each level. Leaders ensure standardization of training, uniformity of effort, and efficient use of resources. Bottom-up experimentation applies new ideas to concrete problems.

Machine learning and data analysis should be no different. The challenge is finding a way to link individual instances of innovation with the resources and influence to scale them across the institution. The Army's use of the 18th Airborne Corps to bridge the gap between service-level programs and individual initiatives offers a clear example of how to do so.

The 18th Airborne Corps fills a contingency-response role like the Marine Corps. Located at Fort Liberty, it is the headquarters element containing the 101st and 82nd Airborne Divisions, along with the 10th Mountain and 3rd Infantry Divisions. As part of a broader modernization program, the 18th Airborne Corps has focused on creating a technology ecosystem to foster innovation. Individual soldiers across the corps can build personal applications that aggregate, analyze, and present information in customizable dashboards that streamline work processes and allow for data-informed decision-making.

For example, soldiers from the 82nd Airborne Division created a single application to monitor and perform logistics tasks. The 18th Airborne Corps Data Warfare Company built a tool for real-time monitoring of in-theater supply levels with alerts for when certain classes of supply run low. Furthermore, the command integrates these projects and other data applications to streamline combat functions. For example, the 18th Airborne Corps practices integrating intelligence analysis, target acquisition, and fires through joint exercises like Scarlet Dragon.

As well as streamlining operational workflows, the data analytics improve training and readiness. The 18th Airborne Corps has developed a Warrior Skills training program in which it collects data to establish a baseline against which it can compare individual soldiers' skills over time. Finally, some of the barracks at Fort Liberty have embedded QR codes that soldiers scan to check in when they're on duty.

These examples demonstrate how a unit of data-literate individuals can leverage modern technology to increase the capacity of the entire organization. Many of these projects could not have been scaled beyond institutional boundaries without corps-level sponsorship. Furthermore, because the 18th Airborne Corps is an operational-level command, it connects soldiers in its divisions with the Armys service-level stakeholders.

Designating a Major Command as Service Lead

If the Marine Corps followed the 18th Airborne Corps model, it would designate one operating force unit as the service lead for data analysis and automation to link service headquarters with tactical units. Institutionalizing security systems, establishing boundaries for experimentation, expanding successful projects across a Marine expeditionary force, and implementing a standardized training program would create an ecosystem to cultivate the technical advances service leaders want.

This proposed force would also streamline interactions between marines and the service, and it would ensure manning continuity for units that develop data systems so that efforts do not peter out as individuals rotate to new assignments. Because of its geographic proximity to Fort Liberty, and because 2d Marine Division artillery units have already participated in the recent Scarlet Dragon exercises and thus have some familiarity with the 18th Airborne Corps' projects, II Marine Expeditionary Force is a logical choice to serve as the service lead.

Once designated, II Marine Expeditionary Force should establish an office, directorate, or company responsible for the entire force's data literacy and automation effort. This would follow the 18th Airborne Corps model of establishing a data warfare company to house soldiers with specialized technical skills. This unit could then develop a training program to be implemented across the Marine expeditionary force. The focus of this effort would be a rank-and-billet-appropriate education plan that teaches every marine in the Marine expeditionary force how to read, work with, communicate, and analyze data using low- or no-code applications like PowerBI or the Army's Vantage system, with crucial billets learning how to build and maintain these applications. Drawing on the work it is undertaking with Training and Education Command, combined with its members' academic and industry expertise, the Marine Innovation Unit (of which I am a member) could develop a training plan based on the Army's model for II Marine Expeditionary Force to use, working alongside the proposed office to create and implement it.

This training plan will teach every marine the rudimentary skills necessary to implement simple solutions for themselves. The coordinating office will centralize overhead, standardize training, and scale valuable projects across the whole Marine expeditionary force. It would link the high-level service efforts with the small-scale problems facing the operating forces that data literacy and automation could fix.

All the individuals interviewed agreed that engaged and supportive leadership has been an essential precondition for all successful data automation projects. Service-level tasking should ensure that all subordinate commanders take the initiative seriously. Once lower-echelon units see the hours of work spent on rote and mundane tasks that could be automated and then invested back into training and readiness, bureaucratic politics will melt away, and implementation should follow. The key is for a leader to structure the incentives for subordinates to encourage the first generation of adopters.

Forcing deploying units to perform another training requirement could overburden them. However, implementing this training carefully would ensure it is manageable. The Marine expeditionary force and its subordinate units headquarters are not on deployment rotations, so additional training would not detract from their pre-deployment readiness process. Also, implementing these technologies would create significant time savings, freeing up extra time and manpower for training and readiness tasks.

Conclusion

Senior leaders across the Department of Defense and Marine Corps have stated that AI and machine learning are the way forward for the future force. The efficiency loss created by the service's current analog processes and static data (let alone the risk to mission and risk to force associated with these antiquated processes in a combat environment) is enough reason to adopt this approach. However, discussions with currently serving practitioners reveal that the Marine Corps needs to move more quickly. It has pursued a two-track model with innovation at the lowest levels and resources at the highest. Bridging the gap between these parallel efforts will be critical to meaningful progress.

If the Marine Corps intends to incorporate AI and machine learning into its deployed operations, it should build the groundwork by training its workforce and building familiarity during garrison operations. Once marines are familiar with and able to employ these tools in a stable and predictable environment, they will naturally use them when deployed to a hostile littoral zone. Designating one major command to act as the service lead would go a long way toward accomplishing that goal. This proposed command would follow the 18th Airborne Corps model of linking the strategic and tactical echelons of the force and implementing new and innovative ways of automating day-to-day tasks and data analysis. Doing so will streamline garrison operations and improve readiness.

Will McGee is an officer in the U.S. Marine Corps Reserves, currently serving with the Marine Innovation Unit. The views in this article are the author's and do not represent those of the Marine Innovation Unit, the U.S. Marine Corps, the Defense Department, or any part of the U.S. government.

Image: Midjourney

Goldman Sachs: Is there "too much spend, too little benefit" in AI craze? By Investing.com – Investing.com

Investing.com -- Tech giants and other firms are set to spend roughly $1 trillion in the coming years on developing their artificial intelligence capabilities, including investments in data centers, chips and other AI-related infrastructure, according to analysts at Goldman Sachs.

But they argued that these expenditures have so far failed to yield much "beyond reports of efficiency gains" among AI developers, while Nvidia (NASDAQ: NVDA) -- the Wall Street darling and focal point of the craze around the nascent technology -- has seen its shares "sharply correct."

To explore whether heavy corporate spending on AI will deliver meaningful "benefits and returns," the investment bank spoke with a series of experts, including Daron Acemoglu, a professor at the Massachusetts Institute of Technology who specializes in economics.

AI spending's potential impact on productivity

Acemoglu took a largely skeptical stance on the outcome of the capital rush, estimating that only a quarter of AI-related actions will be "cost-effective to automate" within 10 years -- implying that AI will affect less than 5% of all tasks.

"Over this [10-year] horizon, AI technology will [...] primarily increase the efficiency of existing production processes by automating certain tasks or by making workers who perform these tasks more productive," Acemoglu told Goldman Sachs. "So, estimating the gains in productivity and growth from AI technology on a shorter horizon depends wholly on the number of production processes that the technology will impact and the degree to which this technology increases productivity or reduces costs over this timeframe."

However, he predicted that there will not be a "massive" number of tasks impacted by AI in the near term, adding that most actions humans currently perform -- such as manufacturing or mining -- are "multifaceted and require real-world interaction." Instead, Acemoglu said he expects AI will have the biggest influence in the coming years on "pure mental tasks," adding that while the number of these actions will be "non-trivial," it will not be "huge."

Ultimately, Acemoglu forecast that AI will increase U.S. productivity by only 0.5% and bolster overall economic growth by 0.9% over the next decade.

AI's "limitations"

Acemoglu added he was "less convinced" that Big Tech's plans to greatly increase the amount of data and processing power they plug into AI models will lead to faster improvements of these systems.

"Including twice as much data from [social media platform] Reddit into the next version of [OpenAI's chatbot] [ChatGPT] may improve its ability to predict the next word when engaging in an informal conversation, but it won't necessarily improve a customer service representatives ability to help a customer troubleshoot problems with their video service," he said.

The quality of data is also crucial, Acemoglu noted, flagging that it remains unclear what will be the major sources of high-end information or whether it can be obtained "easily and cheaply."

Finally, he warned that the current architecture of AI technology itself "may have limitations."

"Human cognition involves many types of cognitive processes, sensory inputs, and reasoning capabilities. Large language models (LLMs) today have proven more impressive than many people would have predicted, but a big leap of faith is still required to believe that the architecture of predicting the next word in a sentence will achieve capabilities as smart as HAL 9000 in '2001: A Space Odyssey,'" Acemoglu said, referring to the fictional artificial intelligence character in a popular 1960s science fiction film.

An AI "bubble" or a "promising" spending cycle?

The Goldman Sachs analysts took a mixed view of the crush of spending on AI, with some saying the technology has yet to show it can solve the complex problems needed to justify the elevated expenditures.

These researchers also said they do not anticipate that AI costs will ever decline to such an extent that it will be affordable for companies to automate a large portion of tasks. Fundamentally, they said the AI story that has driven an uptick in the benchmark index so far this year is "unlikely to hold up."

Despite these concerns, other Goldman Sachs analysts took a more optimistic stance, forecasting that AI could lead to the automation of a quarter of all work actions. The current uptick in capital expenditures, they argued, seems "more promising" than prior spending cycles because "incumbents with low costs of capital and massive distribution networks and customer bases are leading it." They also predicted that U.S. productivity would improve by 9% and economic activity would grow by 6.1% cumulatively in the next decade thanks to AI advancements.

Overall, however, the Goldman Sachs analysts concluded that there is "still room for the AI theme to run, either because AI starts to deliver on its promise, or because bubbles take a long time to burst."

Meta starts testing user-created AI chatbots on Instagram – TechCrunch

Meta CEO Mark Zuckerberg announced on Thursday that the company will begin to surface AI characters made by creators through Meta AI Studio on Instagram. The tests will begin in the U.S.

The social media company's announcement comes on the same day that a16z-backed chatbot company Character.AI is rolling out the ability for users to talk with AI avatars over a call.

In a post on his broadcast channel, Zuckerberg noted that these chatbots will be clearly marked as AI so users are aware.

"Rolling out an early test in the U.S. of our AI studio so you might start seeing AIs from your favorite creators and interest-based AIs in the coming weeks on Instagram. These will primarily show up in messaging for now, and will be clearly labeled as AI," he said.

"It's early days and the first beta version of these AIs, so we'll keep working on improving them and make them available to more people soon," Zuckerberg added.

Zuckerberg noted that Meta worked with creators like the meme account Wasted and technology creator Don Allen Stevenson III to roll out early versions of creator-made chatbots.

In an interview Zuckerberg shared on his social channels, the CEO expanded on the use cases for AI avatars. "There needs to be a lot of different AIs that get created to reflect people's different interests. So a big part of the approach is going to be enabling every creator, and then eventually also every small business on the platform, to create an AI for themselves to help them interact with their community, and their customers if they're a business," he said.

Creators may also want to use AIs to engage with fans, as they don't currently have time to respond to all the incoming messages.

Still, he admitted that how good the AI avatars will ultimately be is going to become something of an art form that evolves and improves over time.

"I don't think we know going into this what is going to be the most engaging and entertaining and trust-building formula for this," Zuckerberg noted. "So we want to give people tools so that you can experiment with this and see what ends up working well," he said.

Meta will initially begin testing the feature with around 50 creators and a small percentage of users, and will then roll it out to more people over the next couple of months, with the hope of having it fully launched by August.

Meta first announced its AI Studio last year at its developer conference to let businesses build custom chatbots.

Additional reporting: Sarah Perez

Accelerating the next wave of generative AI startups | Amazon Web Services – AWS Blog

Since day one, AWS has helped startups bring their ideas to life by democratizing access to the technology powering some of the largest enterprises around the world, including Amazon. Each year since 2020, we have provided startups nearly $1 billion in AWS Promotional Credits. It's no coincidence, then, that 80% of the world's unicorns use AWS. I am lucky to have had a front-row seat to the development of so many of these startups over my time at AWS: companies like Netflix, Wiz, and Airtasker. And I'm enthusiastic about the rapid pace at which startups are adopting generative artificial intelligence (AI) and how this technology is creating an entirely new generation of startups. A staggering 96% of AI/ML unicorns run on AWS.

These generative AI startups have the ability to transform industries and shape the future, which is why today we announced a commitment of $230 million to accelerate the creation of generative AI applications by startups around the world. We are excited to collaborate with visionary startups, nurture their growth, and unlock new possibilities. In addition to this monetary investment, today we're also announcing the second annual AWS Generative AI Accelerator in partnership with NVIDIA. This global 10-week hybrid program is designed to propel the next wave of generative AI startups. This year, we're expanding the program 4x to serve 80 startups globally. Selected participants will each receive up to $1 million in AWS Promotional Credits to fuel their development and scaling needs. The program also provides go-to-market support as well as business and technical mentorship. Participants will tap into a network that includes domain experts from AWS as well as key AWS partners such as NVIDIA, Meta, Mistral AI, and venture capital firms investing in generative AI.

In addition to these programs, AWS is committed to making it possible for startups of all sizes and developers of all skill levels to build and scale generative AI applications with the most comprehensive set of capabilities across the three layers of the generative AI stack. At the bottom layer of the stack, we provide infrastructure to train large language models (LLMs) and foundation models (FMs) and produce inferences or predictions. This includes the best NVIDIA GPUs and GPU-optimized software, custom machine learning (ML) chips including AWS Trainium and AWS Inferentia, as well as Amazon SageMaker, which greatly simplifies the ML development process. In the middle layer, Amazon Bedrock makes it easier for startups to build secure, customized, and responsible generative AI applications using LLMs and other FMs from leading AI companies. And at the top layer of the stack, we have Amazon Q, the most capable generative AI-powered assistant for accelerating software development and leveraging companies' internal data.
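
To give a feel for that middle layer, here is a minimal sketch of invoking a foundation model through Bedrock's runtime API with boto3. The region, model ID, and prompt are examples; the request body follows the Anthropic messages format Bedrock documents for Claude models, but verify the details against current documentation before relying on them.

```python
# Minimal sketch: one Bedrock invocation via boto3 (details illustrative).
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Anthropic's messages schema as documented for Claude models on Bedrock.
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Draft a one-line status update."}],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=json.dumps(body),
)

# The response body is a stream; the generated text sits in content[0].
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```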

Customers are innovating using technologies across the stack. For instance, during my time at the VivaTech conference in Paris last month, I sat down with Michael Chen, VP of Strategic Alliances at PolyAI, which offers customized voice AI solutions for enterprises. PolyAI develops natural-sounding text-to-speech models using Amazon SageMaker. And they build on Amazon Bedrock to ensure responsible and ethical AI practices. They use Amazon Connect to integrate their voice AI into customer service operations.

At the bottom layer of the stack, NinjaTech uses Trainium and Inferentia2 chips, along with Amazon SageMaker, to build, train, and scale custom AI agents. From conducting research to scheduling meetings, these AI agents save time and money for NinjaTech's users by bringing the power of generative AI into their everyday workflows. I recently sat down with Sam Naghshineh, co-founder and CTO, to discuss how this approach enables them to save time and resources for their users.

Leonardo.AI, a startup from the 2023 AWS Generative AI Accelerator cohort, is also harnessing the capabilities of AWS Inferentia2 to enable artists and professionals to produce high-quality visual assets with unmatched speed and consistency. By reducing their inference costs without sacrificing performance, Leonardo.AI can offer their most advanced generative AI features at a more accessible price point.

Leading generative AI startups, including Perplexity, Hugging Face, AI21 Labs, Articul8, Luma AI, Hippocratic AI, Recursal AI, and DatologyAI are building, training, and deploying their models on Amazon SageMaker. For instance, Hugging Face used Amazon SageMaker HyperPod, a feature that accelerates training by up to 40%, to create new open-source FMs. The automated job recovery feature helps minimize disruptions during the FM training process, saving them hundreds of hours of training time a year.

At the middle layer, Perplexity leverages Amazon Bedrock with Anthropic Claude 3 to build its AI-powered search engine. Bedrock ensures robust data protection, ethical alignment through content filtering, and scalable deployment of Claude 3. Meanwhile, Nexxiot, an innovator in transportation and supply chain solutions, quickly moved its Scope AI assistant solution to Amazon Bedrock with Anthropic Claude in order to give its customers the best real-time, conversational insights into their transport assets.

At the top layer, Amazon Q Developer helps developers at startups build, test, and deploy applications faster and more efficiently, allowing them to focus their valuable energy on driving innovation. Ancileo, an insurance SaaS provider for insurers, re-insurers, brokers, and affinity partners, uses Amazon Q Developer to reduce the time to resolve coding-related issues by 30%, and is integrating ticketing and documentation with Amazon Q to speed up onboarding and allow anyone in the company to quickly find answers. Amazon Q Business enables everyone at a startup to be more data-driven and make better, faster decisions using the organization's collective knowledge. Brightcove, a leading provider of cloud video services, deployed Amazon Q Business to streamline its customer support workflow, allowing the team to expedite responses, provide more personalized service, and ultimately enhance the customer experience.

The future of generative AI belongs to those who act now. The application window for the AWS Generative AI Accelerator program is open from June 13 to July 19, 2024, and we'll be selecting a global cohort of the most promising generative AI startups. Don't miss this unique chance to redefine what's possible with generative AI, and apply now!

Other helpful resources include:

Apply now, explore the resources, and join the generative AI revolution with AWS.

Twitch series: Let's Ship It with AWS! Generative AI

AWS Generative AI Accelerator Program: Apply now

Reduce AI Hallucinations With This Neat Software Trick – WIRED

To start off, not all RAGs are of the same caliber. The accuracy of the content in the custom database is critical for solid outputs, but that isn't the only variable. "It's not just the quality of the content itself," says Joel Hron, a global head of AI at Thomson Reuters. "It's the quality of the search, and retrieval of the right content based on the question." Mastering each step in the process is critical, since one misstep can throw the model completely off.

"Any lawyer who's ever tried to use a natural language search within one of the research engines will see that there are often instances where semantic similarity leads you to completely irrelevant materials," says Daniel Ho, a Stanford professor and senior fellow at the Institute for Human-Centered AI. Ho's research into AI legal tools that rely on RAG found a higher rate of mistakes in outputs than the companies building the models reported.

Which brings us to the thorniest question in the discussion: How do you define hallucinations within a RAG implementation? Is it only when the chatbot generates a citation-less output and makes up information? Is it also when the tool may overlook relevant data or misinterpret aspects of a citation?

According to Lewis, hallucinations in a RAG system boil down to whether the output is consistent with what's found by the model during data retrieval. Though, the Stanford research into AI tools for lawyers broadens this definition a bit by examining whether the output is grounded in the provided data as well as whether it's factually correct, a high bar for legal professionals who are often parsing complicated cases and navigating complex hierarchies of precedent.
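
To make the moving parts concrete, here is a minimal sketch of the retrieval step everyone above is debating: embed a small document store, pull the passages closest to the question, and hand only those passages to the model. The libraries and the toy documents are illustrative assumptions, not any vendor's implementation, and the final LLM call is omitted.

```python
# Minimal RAG retrieval sketch (toy documents; library choice illustrative).
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "The 2024 lease addendum caps annual rent increases at 3 percent.",
    "Maintenance requests must be filed through the tenant portal.",
    "The original 2019 lease allowed rent increases of up to 8 percent.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)

question = "How much can rent go up each year?"
q_vec = model.encode([question], normalize_embeddings=True)[0]

# Vectors are normalized, so a dot product is cosine similarity.
scores = doc_vecs @ q_vec
top = np.argsort(scores)[::-1][:2]  # indices of the two best passages

context = "\n".join(docs[i] for i in top)
prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this grounded prompt is what gets sent to the LLM
```

Notice that nothing in the retrieval step guarantees relevance: the superseded 2019 clause can score nearly as high as the current one, which is exactly the kind of semantic-similarity miss Ho warns about.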

While a RAG system attuned to legal issues is clearly better at answering questions on case law than OpenAI's ChatGPT or Google's Gemini, it can still overlook the finer details and make random mistakes. All of the AI experts I spoke with emphasized the continued need for thoughtful, human interaction throughout the process to double-check citations and verify the overall accuracy of the results.

Law is an area where there's a lot of activity around RAG-based AI tools, but the process's potential is not limited to a single white-collar job. "Take any profession or any business. You need to get answers that are anchored on real documents," says Arredondo. "So, I think RAG is going to become the staple that is used across basically every professional application, at least in the near to mid-term." Risk-averse executives seem excited about the prospect of using AI tools to better understand their proprietary data without having to upload sensitive info to a standard, public chatbot.

It's critical, though, for users to understand the limitations of these tools, and for AI-focused companies to refrain from overpromising the accuracy of their answers. Anyone using an AI tool should still avoid trusting the output entirely, and they should approach its answers with a healthy sense of skepticism, even if the answer is improved through RAG.

"Hallucinations are here to stay," says Ho. "We do not yet have ready ways to really eliminate hallucinations." Even when RAG reduces the prevalence of errors, human judgment reigns paramount. And that's no lie.

Early bets on AI have helped this global tech fund outperform for a second year – CNBC

An early bet on artificial intelligence, thanks to a simple investment framework, is helping T. Rowe Price's Global Technology Fund (PRGTX) outperform the market for a second straight year.

"AI has to be the biggest productivity enhancer for technology, for the economy since electricity," said the fund's portfolio manager, Dominic Rizzo. "Our framework led us to be early to this AI trend, [and] led us to be early to the chip intensity of this AI trend."

The fund has jumped more than 26% in 2024 after surging nearly 56% in 2023. That's due in part to its winning bets in AI names and semiconductor stocks, many of which have outperformed the S&P 500 and Nasdaq Composite.

Rizzo, a T. Rowe Price lifer who took the helm of the fund in 2022, attributes PRGTX's success to a four-step investing framework. The first pillar is what he calls "linchpin technologies" critical to a company's success. This includes artificial intelligence for semiconductor companies such as Nvidia. The fund manager also looks for innovation in secular growth markets, or companies that are taking market share in fast-growing markets more quickly than competitors. Another factor is improving fundamentals, as measured through improved free cash flow or operating margin expansion, and reasonable valuation.

"The way you get burned in tech is if you buy either extremely expensive stocks, or often if you buy extremely cheap stocks," he said. "That's because in tech, extremely cheap stocks are often cheap for a reason, [while] the extremely expensive stocks are too expensive to earn an outsize return."

Early bets on AI

Key to the $4.5 billion fund's recent success has been early positions in AI stocks and chipmaking darling Nvidia. The AI leader, added to the fund at the end of 2021, today accounts for nearly 18% of the portfolio. Shares are up 166% since the start of 2024.

"Nvidia is clearly the linchpin of AI," Rizzo said. "They've done such a tremendous job building up all the different pieces that you need, whether it's the central processing units, the graphics processing units, the networking technology, the software ecosystem. They really have all the different pieces necessary."

But the AI darling is far from Rizzo's only semiconductor commitment on the AI theme. Taiwan Semiconductor Manufacturing and Advanced Micro Devices make up 5% and 4% of the portfolio, respectively. Chip equipment maker ASML Holding and semiconductor maker Analog Devices also make the fund's top 10 holdings, at about a combined 5%.

Beyond chipmakers, Rizzo has also made significant bets on Apple and Microsoft, which account for 12% and about 10% of the portfolio, respectively. The pair are the fund's second- and third-largest holdings, behind Nvidia. Rizzo highlighted Microsoft's enterprise software leadership and Apple's consumer dominance. Together, both stocks also lend stability to the portfolio and trade at reasonable valuations with compound earnings growth potential, he added.

For Apple, Rizzo touted healthy growth in the company's services business and smartphone growth in emerging markets as potential catalysts for the stock. More critical still is Apple's AI vision, which the MBA from the University of Chicago Booth School of Business expects to fuel a smartphone upgrade cycle. Apple unveiled its long-awaited AI plan at its Worldwide Developers Conference this week, calling it Apple Intelligence. Features include upgrades to the Siri digital assistant that integrate ChatGPT.

Microsoft appeal

For Microsoft, Rizzo highlighted the company's partnership with OpenAI that strengthens its AI prospects, as well as the unique position of its Azure cloud computing business. Rizzo also views software holdings such as SAP and ServiceNow as next-stage beneficiaries of AI tailwinds, well positioned to benefit from the data needed for AI.

"We had the right framework, and the right investing style for this type of market," Rizzo said. "I hope that the investment framework will prove itself to continue to work, regardless of the market environment."

The fund has a $2,500 minimum investment, charges a 0.94% net expense ratio and is rated two stars by Morningstar, which said in a report late last year that the Global Technology Fund is "off to a good start" and "has some promise but has much to prove" after the revamp that brought Rizzo onboard in 2022.

How Pope Francis became the AI ethicist for world leaders and tech titans – The Washington Post

BARI, Italy -- Pope Francis is an octogenarian who says he cannot use a computer, but on a February afternoon in 2019, a top diplomat of American Big Tech entered the papal residence seeking guidance on the ethics of a gestating technology: artificial intelligence.

Microsoft President Brad Smith and the pope discussed the rapid development of the technology, Smith recounted in an interview with The Washington Post, and Francis appeared to grasp its risks. As Smith departed, the pope uttered a warning. "Keep your humanity," he urged, as he held Smith's wrist.

In the five years since that meeting, AI has become unavoidable, as the pope himself found out last year when viral images of him in a Balenciaga puffer jacket heralded a new era of deepfakes. And as the technology has proliferated, the Vatican has positioned itself as the conscience of companies like Microsoft and emerged as a surprisingly influential voice in the debate over AI's global governance.

In southern Italy on Friday, Francis became the first pope to address a Group of Seven forum of world leaders, delivering a moral treatise on the cognitive-industrial revolution represented by AI, as he sought to elevate the topic in the same manner he did climate change.

President Biden greeted Pope Francis on June 14 at the Group of Seven roundtable in Fasano, Italy. (Video: Reuters)

In a sweeping speech, the pope sketched out the ramifications of a technology as fascinating as it is terrifying, saying it could change the way we conceive of our identity as human beings. He decried how AI could cement the dominance of Western culture and diminish human dignity.

AI, he said, stood as a tool that could democratize knowledge, exponentially advance science and alleviate the human condition as people give arduous work to machines. But he warned that it also has the power to destroy and called for an urgent ban on lethal autonomous weapons. As a ghost of the future, he referenced the 1907 dystopian novel Lord of the World, in which technology replaces religion and faith in God.

"No machine should ever choose to take the life of a human being," the pope said.

He has previously insisted that AI's risks must be managed through a global treaty, and on Friday he endorsed the need for a set of uniting global principles to guide AI's development.

The Rome Call for AI Ethics, a document that counted the Vatican, Microsoft, and IBM among its original signatories in 2020, is emerging as a gold standard of best AI practices. It has informed G-7 discussions about developing a code of conduct. And on Friday, the G-7 leaders, with the Vatican's support, announced that they would create a badge of honor of sorts: a new label for companies that agree to safely and ethically develop AI tools and follow guidelines for the voluntary reporting and monitoring of risks. Echoing Vatican concerns, leaders additionally called for responsible military uses of AI.

The AI issue has provided an opening for the church, diminished by its handling of clerical sex abuse scandals, to reassert its moral authority. Microsoft and at least some other tech companies appear eager for the church's seal of approval, as the industry grapples with the public-relations challenges of a technology that could automate jobs, amplify misinformation and create new cybersecurity risks.

The Vatican has earned a seat at the Big Tech table. An ancient institution with a mixed track record on science (see the trial of Galileo) is now dispatching representatives to major tech events.

The Rev. Paolo Benanti, the Vatican's leading AI expert, a Franciscan priest and a trained engineer credited with coining the term "algorethics," last year secured a spot on the United Nations Advisory Body on Artificial Intelligence and has become a major player in the crafting of a national AI policy for Italy, a G-7 nation. At the Vatican's request, IBM hosted a global summit of colleges at the University of Notre Dame to bring AI ethics to the forefront of curriculums.

The Vatican's views have influenced concrete business decisions. Microsoft's Smith told The Post: "We developed our own technology that would allow anyone with just a few seconds of anyone's voice to be able to replicate it. And we chose not to release that." The Rome principles, he added, "are definitely part of what has helped us at Microsoft strive to take a broad-minded approach to the development of AI, including within our own four walls. I just think it's provided a broad humanistic and intellectual frame."

The pledge's emphasis on inclusion also influenced the company's decision to launch a fellowship that brings together researchers and civil society leaders, largely from the Global South, to evaluate the impact of the technology, said Natasha Crampton, Microsoft's chief responsible AI officer. Fellows have helped the company develop multilingual evaluations of AI models and ensured that the company understands local context and cultural norms as it develops new products.

Not all companies are on board with the Rome principles. Some have forged ahead with AI-manipulated audio that researchers warn could be abused to dupe voters ahead of elections.

Not everyone has been allowed to join the Rome club, either. The Chinese company Huawei asked, said Archbishop Vincenzo Paglia, president of the Pontifical Academy for Life. "And we said no, because we don't really know what the [people in charge there] think."

In the meantime, the Vatican remains concerned about the misuse of open-source AI. The technology could produce major benefits in health care and education, Benanti said. "But it can also multiply a lot of bad elements in society, and we cannot spread AI everywhere without any political decision-making, because tomorrow we could wake up with a multiplier of inequality, of bioweapons," he said.

Vatican officials have already sounded alarms over what they view as potentially unethical uses, including the facial recognition systems deployed in the 2019-2020 crackdown on protesters in Hong Kong, as well as algorithms for refugee processing such as those in Germany, where AI-fueled linguistic tests have been used to establish whether asylum seekers are lying about their place of origin.

The relationship between the Vatican and AI innovators had its genesis in a 2018 speech that Benanti delivered on AI ethics. A senior Microsoft representative in Italy had been in the audience, and the two began meeting regularly. They brought in Paglia, who was interested in broadening the remit of his academy beyond core issues such as the ethics of stem cell research.

Ahead of Smith's visit with the pope, Paglia escorted him through Michelangelo's Last Judgement in the Sistine Chapel, and showed him renderings by Galileo of the Earth revolving around the sun, the theory that landed him under house arrest for life after a church trial.

Yet the Vatican's relationship with science hasn't always been Luddite. In the Middle Ages, Catholic scholars seeded Europe with what would become some of its greatest universities. And although targeted by some individual clerics, Darwin's theory of evolution was never officially challenged by the Vatican.

The church officially declares that faith and reason are not in conflict.

"The Bible doesn't tell us how heaven works, but how to get there," said Paglia, quoting Galileo. The archbishop has made official trips to Microsoft's headquarters near Seattle and IBM offices in New York.

Through aggressive AI investments, Microsoft has become the world's most valuable company, worth more than $3 trillion. But its continued success hinges on curbing negative perceptions of AI. Worries that the tech could displace jobs, exacerbate inequalities, supercharge surveillance and usher in new kinds of warfare are prompting governments around the world to consider stringent regulations that could blunt the company's ambitions.

The European Union is readying a landmark law that could limit more-advanced generative AI models. The Federal Trade Commission is investigating a deal that Microsoft made with the AI start-up Inflection, probing whether the tech giant deliberately set up the investment to avoid a merger review. And U.S. enforcers reached a deal that will open the company to greater scrutiny of how it wields power to dominate artificial intelligence, including its multibillion-dollar investments in ChatGPT maker OpenAI. That relationship has also exposed Microsoft to new reputational risks, as OpenAI chief executive Sam Altman frequently invites controversy.

Under Smith's leadership, Microsoft has built one of the most sophisticated global lobbying organizations to defuse its regulatory challenges and try to convince people that it is the tech titan the world can trust to build AI. Smith regularly meets with heads of state, including appearing last month alongside President Biden at a factory opening. To be an effective business, Microsoft has to find ways to work with governments and to ensure its technology can transcend them, Smith said.

The world's oldest global organization can be a unique teacher and partner in that effort, he said, referring to the Vatican. Catholicism and other religions aren't bound by national borders, much like the applications Microsoft is peddling globally.

"At one level, you might look at the two of us and think we're odd bedfellows," Smith said. "But on the other hand, it's a perfect combination."

Zakrzewski reported from Washington.
