Nvidia’s Jensen Huang plays down competition worries as key supplier disappoints with subdued expectations for AI … – Fortune

Nvidia will remain the gold standard for AI training chips, CEO Jensen Huang told investors, even as rivals push to cut into his market share and one of Nvidia's major suppliers gave a subdued forecast for AI chip sales.

Everyone from OpenAI to Elon Musk's Tesla relies on Nvidia semiconductors to run their large language or computer vision models. The rollout of Nvidia's Blackwell system later this year will only cement that lead, Huang said at the company's annual shareholder meeting on Wednesday.

Unveiled in March, Blackwell is the next generation of AI training processors to follow Nvidia's flagship Hopper line of H100 chips, one of the most prized possessions in the tech industry, fetching prices in the tens of thousands of dollars each.

"The Blackwell architecture platform will likely be the most successful product in our history and even in the entire computer history," Huang said.

Nvidia briefly eclipsed Microsoft and Apple this month to become the world's most valuable company in a remarkable rally that has fueled much of this year's gains in the S&P 500 index. At more than $3 trillion, Huang's company was at one point worth more than entire economies and stock markets, only to suffer a record loss in market value as investors locked in profits.

Yet as long as Nvidia chips continue to be the benchmark for AI training, there's little reason to believe the longer-term outlook is cloudy, and here the fundamentals continue to look robust.

One of Nvidia's key advantages is a sticky AI ecosystem known as CUDA, short for Compute Unified Device Architecture. Much like how everyday consumers are loath to switch from their Apple iOS device to a Samsung phone running Google Android, an entire cohort of developers has been working with CUDA for years and feels so comfortable that there is little reason to consider another software platform. Much like the hardware, CUDA has effectively become a standard of its own.

"The Nvidia platform is broadly available through every major cloud provider and computer maker, creating a large and attractive base for developers and customers, which makes our platform more valuable to our customers," Huang added on Wednesday.

The AI trade did take a recent hit after memory-chip supplier Micron Technology, provider of high-bandwidth memory (HBM) chips to companies like Nvidia, forecast fiscal fourth-quarter revenue would only match market expectations of around $7.6 billion.

Shares in Micron plunged 7%, underperforming by a large margin a slight gain in the broader tech-heavy Nasdaq Composite.

In the past, Micron and its Korean rivals Samsung and SK Hynix have seen a cyclical boom-and-bust common to the memory-chip market, long considered a commodity business when compared with logic chips such as graphic processors.

But excitement has surged on demand for the memory chips necessary for AI training. Micron's stock more than doubled over the past 12 months, meaning investors have already priced in much of management's predicted growth.

"The guidance was basically in line with expectations, and in the AI hardware world if you guide in line that's considered a slight disappointment," says Gene Munster, a tech investor with Deepwater Asset Management. "Momentum investors just didn't see that incremental reason to be more positive about the story."

Analysts closely track demand for high-bandwidth memory as a leading indicator for the AI industry because it is so crucial for solving the biggest economic constraint facing AI training today: the issue of scaling.

Costs crucially do not rise in line with a model's complexity (the number of parameters it has, which can number into the billions) but rather grow exponentially. This results in diminishing returns in efficiency over time.

Even if revenue grows at a consistent rate, losses risk ballooning into the billions or even tens of billions a year as a model gets more advanced. This threatens to overwhelm any company that doesn't have a deep-pocketed investor like Microsoft capable of ensuring an OpenAI can still "pay the bills," as CEO Sam Altman phrased it recently.

A key reason for diminishing returns is the growing gap between the two factors that dictate AI training performance. The first is a logic chip's raw compute power, measured in FLOPS (floating-point operations per second), and the second is the memory bandwidth needed to quickly feed it data, often expressed in millions of transfers per second, or MT/s.

Since they work in tandem, scaling one without the other simply leads to waste and cost inefficiency. That's why FLOPS utilization, or how much of the available compute can actually be brought to bear, is a key metric when judging the cost efficiency of AI models.
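To make that compute-versus-bandwidth balance concrete, here is a minimal roofline-style sketch. It is not from the article, and the peak-compute and bandwidth figures are illustrative placeholders rather than the specs of any real Nvidia or Micron part; it simply shows how achievable throughput, and therefore FLOPS utilization, is capped by memory bandwidth when a workload performs too few operations per byte moved.

```python
# Roofline-style back-of-the-envelope; all numbers are illustrative.
peak_flops = 1.0e15        # hypothetical peak compute, FLOP/s
mem_bandwidth = 3.0e12     # hypothetical memory bandwidth, bytes/s

def achievable_flops(arithmetic_intensity: float) -> float:
    """Throughput is the lower of the compute roof and the bandwidth roof.

    arithmetic_intensity = FLOPs performed per byte moved from memory.
    """
    return min(peak_flops, mem_bandwidth * arithmetic_intensity)

for intensity in (10, 100, 1000):  # FLOPs per byte
    achieved = achievable_flops(intensity)
    utilization = achieved / peak_flops
    print(f"{intensity:>5} FLOP/byte -> {achieved:.1e} FLOP/s "
          f"({utilization:.0%} of peak)")
```

Below the crossover intensity (peak_flops / mem_bandwidth, roughly 333 FLOP/byte with these placeholder numbers), the chip is memory-bound and much of its raw compute sits idle, which is exactly the bottleneck described next.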

As Micron points out, data transfer rates have been unable to keep pace with rising compute power. The resulting bottleneck, often referred to as the "memory wall," is a leading cause of today's inherent inefficiency when scaling AI-training models.

That explains why the U.S. government focused heavily on memory bandwidth when deciding which specific Nvidia chips needed to be banned from export to China in order to weaken Beijing's AI development program.

On Wednesday, Micron said its HBM business was sold out all the way through the end of the next calendar year, which trails its fiscal year by one quarter, echoing similar comments from Korean competitor SK Hynix.

"We expect to generate several hundred million dollars of revenue from HBM in FY24 and multiple [billions of dollars] in revenue from HBM in FY25," Micron said on Wednesday.

Google touts enterprise-ready AI with more facts and less make-believe – The Verge

Vertex AI, the Google Cloud development platform that allows companies to build services using Google's machine learning and large language models, is getting new capabilities to help prevent apps and services from pushing inaccurate information. After rolling out general availability for Vertex AI's Grounding with Google Search feature in May, which enables models to retrieve live information from the internet, Google has now announced that customers will also have the option to improve their services' AI results with specialized third-party datasets.

Google says the service will utilize data from providers like Moody's, MSCI, Thomson Reuters, and ZoomInfo and that grounding with third-party datasets will be available in Q3 this year. This is one of several new features that Google is developing to encourage organizations to adopt its enterprise-ready generative AI experiences by reducing how often models spit out misleading or inaccurate information.

Another is high-fidelity mode, which enables organizations to source information for generated outputs from their own corporate datasets instead of Gemini's wider knowledge bank. High-fidelity mode is powered by a specialized version of Gemini 1.5 Flash and is available now in preview via Vertex AI's Experiments tool.

Vector Search, which allows users to find images by referencing similar graphics, is also being expanded to support hybrid search. The update is available in public preview and allows those vector-based searches to be paired with text-based keyword searches to improve accuracy. Grounding with Google Search will also soon gain a dynamic retrieval feature that automatically decides whether information should be sourced from Gemini's established datasets or from Google Search for prompts that may require frequently updated resources.
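For developers, grounding is exposed as a tool attached to a Gemini call. The sketch below shows the general shape of that pattern using the Vertex AI Python SDK; the project ID, region, model name, and prompt are placeholders, and the exact class names may vary between SDK versions, so treat it as an illustration to check against current documentation rather than a confirmed snippet.

```python
# Sketch: grounding a Gemini response with Google Search on Vertex AI.
# Project, location, model name, and prompt are placeholder assumptions.
import vertexai
from vertexai.generative_models import GenerativeModel, Tool, grounding

vertexai.init(project="your-project-id", location="us-central1")

# Attach Google Search retrieval as a grounding tool.
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

model = GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Summarize this week's product announcements from the company.",
    tools=[search_tool],  # retrieved sources appear in the grounding metadata
)
print(response.text)
```

Grounding with third-party datasets and high-fidelity mode follow the same idea: the model is steered toward a designated corpus instead of answering purely from its training data.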

Figma announces big redesign with AI – The Verge

Figma is announcing a bunch of new features at its Config conference today, including a major UI redesign, new generative AI tools to help people more easily make projects, and built-in slideshow functionality.

Let's start with the redesign, which is intended to "lay the foundation for the next decade," according to a blog post. You'll see things like a new toolbar, rounded corners, and 200 new icons. As part of the design refresh, the company wants to "focus the canvas less on our UI and more on your work" and make something that's approachable to new users while still being useful to Figma experts.

Figma says this is the company's third significant redesign since Figma's closed beta launch. The new look is rolling out as part of a limited beta, and users can join a waitlist if they want to try it out.

Beyond the redesign, the headline feature addition is a set of new generative AI tools, which look like a useful way to quickly get started on a design. They're basically a Figma-focused version of the "draft an email" type of AI tools we've seen many times.

In a briefing, Figma chief product officer Yuhki Yamashita showed me an example of how Figma could create an app design for a new restaurant. A few seconds after he typed the prompt into a textbox, Figma mocked up an app with menu listings, a tab bar, and even buttons for delivery partners like Uber Eats and DoorDash. It looked like a generic mobile app mock-up, but Yamashita was able to start tweaking it right away.

In another example, Yamashita asked Figma AI to spin up a design for a recipe page for chocolate chip cookies, and sure enough, it did, including an AI-generated image of a cookie. Over Zoom, it looked like a pretty accurate image, but I can't imagine that a basic image of a chocolate chip cookie is hard for an AI generator to make.

Figma is also introducing AI features that could help speed up small tasks in big ways, such as AI-enhanced asset search and auto-generated text in designs instead of generic Lorem ipsum placeholder text.

Ideally, all of the new Figma AI tools will allow people who are newer to Figma to test ideas more easily while letting those who are more well versed in the app iterate more quickly, according to Yamashita. "We're using AI to lower the floor and raise the ceiling," Yamashita says in an interview with The Verge, something CEO Dylan Field has said to The Verge as well.

Figma AI is launching in a limited beta beginning on Wednesday, and interested users can get on the waitlist. Figma says the beta period will run through the end of the year. While in beta, Figma's AI tools will be free, but the company says it might have to introduce usage limits. Figma is also promising clear guidance on pricing when the AI features officially launch.

In a blog post, Figma also spelled out its approach to training its AI models. "All of the generative features we're launching today are powered by third-party, out-of-the-box AI models and were not trained on private Figma files or customer data," writes Kris Rasmussen, Figma's CTO. "We fine-tuned visual and asset search with images of user interfaces from public, free Community files."

Rasmussen adds that Figma trains its models so they learn patterns and Figma-specific concepts and tools but not from users' content. Figma is also going to let Figma admins control whether Figma can train on customer content, which includes file content created in or uploaded to Figma by a user, such as layer names and properties, text and images, comments, and annotations, according to Rasmussen.

Figma won't start training on this content until August 15th; however, you should know that Starter and Professional plans are by default opted in to share this data, while Organization and Enterprise plans are opted out.

The company is likely being specific about how it trains its AI models because of Adobe's recent terms of service disaster, where the company had to clarify that it wouldn't train AI on your work.

In addition to the redesign and the new AI features, Figma is adding a potentially very practical new tool: Figma Slides, a Google Slides-like feature built right into Figma. Yamashita says that users have already been hacking Figma to find a way to make slides, so now there's an official method to build and share presentations right inside the app.

There are a few Figma-specific features that designers will likely appreciate. You'll be able to tweak designs you've included in the deck in real time using Figma's tools. (Note that those changes will only appear in the deck; tweaks won't currently sync back to the original design files, though Yamashita says that Figma wants to make that possible eventually.)

You can also present an app prototype right from the deck, meaning you don't need to make a convoluted screen recording just to demonstrate how one piece connects to another. You can also add interactive features for audience members, like a poll or an alignment scale, where people can plot on a range whether they agree or disagree with something.

Figma Slides will be available in open beta beginning on Wednesday. It will be free while in beta but will become a paid feature when it officially launches. The company is also adding new features for its developer mode in Figma, including a "ready for dev" task list.

This years Config is the first since Adobe abandoned its planned $20 billion acquisition of Figma following regulatory scrutiny. With the dissolution of the merger, Adobe was forced to pay Figma a $1 billion breakup fee.

I Wore Meta Ray-Bans in Montreal to Test Their AI Translation Skills. It Did Not Go Well – WIRED

Imagine you've just arrived in another country, you don't speak the language, and you stumble upon a construction zone. The air is thick with dust. You're tired. You still stink like airplane. You try to ignore the jackhammers to decipher what the signs say: Do you need to cross the street, or walk up another block, or turn around?

I was in exactly such a situation this week, but I came prepared. I'd flown to Montreal to spend two days testing the new AI translation feature on Meta's Ray-Ban smart sunglasses. Within 10 minutes of setting out on my first walk, I ran into a barrage of confusing orange detour signs.

The AI translation feature is meant to give wearers a quick, hands-free way to understand text written in foreign languages, so I couldn't have devised a better pop quiz on how it works in real time.

As an excavator rumbled, I looked at a sign and started asking my sunglasses to tell me what it said. Before I could finish, a harried Quebecois construction worker started shouting at me and pointing northwards, and I scurried across the street.

Right at the start of my AI adventure, I'd run into the biggest limitation of this translation software: it doesn't, at the moment, tell you what people say. It can only parse the written word.

I already knew that the feature was writing-only at the moment, so that was no surprise. But soon, I'd run into its other, less obvious constraints. Over the next 48 hours, I tested the AI translation on a variety of street signs, business signs, advertisements, historical plaques, religious literature, children's books, tourism pamphlets, and menus, with wildly varied results.

Sometimes it was competent, like when it told me that the book I picked up for my son, Trois Beaux Bébés, was about three beautiful babies. (Correct.) It told me repeatedly that ouvert meant "open," which, to be frank, I already knew, but I wanted to give it some layups.

Other times, my robot translator was not up to the task. It told me that the sign for the notorious adult movie theater Cinéma L'Amour translated to "Cinéma L'Amour." (F for effort; Google Translate at least changed it to "Cinema Love.")

At restaurants, I struggled to get it to read me every item on a menu. For example, instead of telling me all of the different burger options at a brew pub, it simply told me that there were burgers and sandwiches, and refused to get more specific despite my wheedling.

When I went to an Italian spot the next night, it similarly gave me a broad summary of the offerings rather than breaking them down in detail; I was told there were grilled meat skewers, but not, for example, that there were duck confit, lamb, and beef options, or how much they cost.

All in all, right now, the AI translation is more of a temperamental party trick than a genuinely useful travel tool for foreign climes.

To use the AI translation, a glasses-wearer needs to say the following magic words: "Hey Meta, look at" and then ask it to translate what it's looking at.

The glasses take a snapshot of whatever is in front of you and then tell you about the text after a few seconds of processing. I'd expected more straightforward translations, but it rarely spits out word-for-word breakdowns. Instead, it paraphrases what it sees or offers a broad summary.

Integrating Artificial Intelligence and Machine Learning in the Marine Corps – War On The Rocks

Every day, thousands of marines perform routine data-collection tasks and make hundreds of data-based decisions. They compile manning data on whiteboards to decide how to staff units, screenshot weather forecasts and paste them into weekly commanders' update briefings, and submit training entries by hand. But anyone who has used ChatGPT or other large-scale data analytic services in the last two years knows the immense power of generative AI to streamline these processes and improve the quality of these decisions by basing them on fresh and comprehensive data.

The U.S. Marine Corps has finally caught wind. Gen. Eric Smith's new message calls for the service to recognize that "[t]echnology has exponentially increased information's effects on the modern battlefield, making our need to exploit data more important than ever." The service's stand-in forces operating concept relies on marine operating forces to integrate into networks of sensors, using automation and machine learning to simplify decision processes and kill chains. Forces deployed forward in littoral environments will be sustained by a supply system that uses data analysis for predictive maintenance, identifying which repair parts the force will need in advance.

However, there is a long way to go before these projections become reality. A series of interviews with key personnel in the Marine Corps operating forces and supporting establishment, other services, and combatant commands over the past six months reveals that the service needs to move more quickly if it intends to use AI and machine learning to execute this operating concept. Despite efforts from senior leaders to nudge the service towards integrating AI and machine learning, only incremental progress has been made.

The service depends on marines possessing the technical skills to make data legible to automated analytic systems and enable data-informed decisions. Designating a Marine expeditionary force or one of its major subordinate commands as the lead for data analysis and literacy would unify the service's two-track approach by creating an ecosystem that will allow bottom-up creativity, scale innovation across the force, and speed the integration of these technologies into the fleet and supporting establishment.

New Technology's Potential to Transform Operations, Logistics, and Education

AI, machine learning, and data analysis can potentially transform military education, planning, and operations. Experiments at Marine Corps University have shown that they could allow students to hone operational art in educational settings by probing new dimensions of complicated problems and understanding the adversary's system. AI models, trained on enemy doctrinal publications and open-source information about troop employment, can use probabilistic reasoning to predict an enemy's response. This capability could supplement intelligence red teams by independently analyzing the adversary's options, improve a staff's capacity for operational planning, or simply give students valuable analytic experience. And NIPRGPT, a new Air Force project, promises to upend mundane staff work by generating documents and emails in a secure environment.

Beyond education and planning, AI and machine learning can transform how the Marine Corps fights. During an operation, AI could employ a networked collection of manned and unmanned systems to reconnoiter and attack an adversary. It could also synthesize and display data from sensor networks more quickly than human analysts or sift through thousands of images to identify particular scenes or locations of interest. Algorithms can either make decisions themselves or enable commanders to make data-informed decisions in previously unthinkable ways. From AI-enabled decision-making to enhanced situational awareness, this technology has the potential to revolutionize military operations. A team of think tank researchers even used AI recently to rethink the Unified Command Plan.

But achieving these futuristic visions will require the service to develop technical skills and familiarity with this technology before implementing it. Developing data literacy is a prerequisite to effectively employing advanced systems, so this skill is as important as anything else the service expects of marines. Before the Marine Corps can use AI-enabled swarms of drones to take a beachhead or use predictive maintenance to streamline supply operations, its workforce needs to know how to work with data analysis tools and be comfortable applying them in everyday work settings.

Delivering for the Marine Corps Today

If the Marine Corps wants to employ machine learning and AI in combat, it should teach marines how to use them in stable and predictable garrison operations. Doing so could save the service tens of thousands of hours annually while increasing combat effectiveness and readiness by replacing the antiquated processes and systems the fleet marine force relies on.

The operating forces are awash with legible data that can be used for analysis. Every unit has records of serialized equipment, weapons, and classified information. Most of these records are maintained in antiquated computer-based programs of record or Excel spreadsheets, offering clear opportunities for optimization.

Furthermore, all marines in the fleet do yearly training and readiness tasks to demonstrate competence in their assigned functions. Nothing happens to this data once it is submitted in the Marine Corps Training Information Management System: no headquarters echelon traces performance over time to ensure that marines are improving, besides an occasional cursory glance during a Commanding General's Inspection visit. This system is labor intensive, requiring manual entries for each training event and each individual marine's results.

Establishing and analyzing performance standards from these events could identify which units have the most effective training regimens. Leaders who outperform could be rewarded, and a Marine expeditionary force could establish best practices across its subordinate units to improve combat readiness. Automating or streamlining data entry and analysis would be straightforward since AI excels at performing repetitive tasks with clear parameters. Doing so would save time while increasing the combat proficiency of the operating forces.

Marines in the operating forces perform innumerable routine tasks that could be easily automated. For example, marines in staff sections grab data and format it into weekly command and staff briefings. Intelligence officers retrieve weather forecast data from their higher headquarters. Supply officers insert supply-level information into the brief. Medical and dental readiness numbers are usually displayed in a green/yellow/red stoplight chart. This data is compiled by hand in PowerPoint slide decks. These simple tasks could be automated, saving thousands of hours across an entire Marine expeditionary force. Commanders would benefit by making decisions based on the most up-to-date information rather than relying on stale data captured hours before.
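To illustrate how small the technical lift for this kind of automation can be, here is a minimal sketch that turns a readiness spreadsheet into the green/yellow/red values a weekly brief displays. The file name, column names, and thresholds are hypothetical placeholders, not an actual Marine Corps system or standard.

```python
# Sketch: auto-generate the readiness "stoplight" values for a weekly brief.
# File name, columns, and thresholds are hypothetical placeholders.
import pandas as pd

def stoplight(pct: float) -> str:
    """Map a readiness percentage to a green/yellow/red band."""
    if pct >= 90:
        return "green"
    if pct >= 75:
        return "yellow"
    return "red"

df = pd.read_csv("unit_readiness.csv")  # columns: unit, medical_pct, dental_pct
for col in ("medical_pct", "dental_pct"):
    df[f"{col}_status"] = df[col].apply(stoplight)

# A summary table a briefing template or dashboard could consume directly,
# refreshed whenever the source data changes instead of rebuilt by hand.
print(df.to_string(index=False))
```

A script like this, scheduled to run against the authoritative data source, would hand commanders current numbers at brief time rather than figures copied into slides hours earlier.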

The Marine Corps uses outdated processes and systems that waste valuable time that could be used on training and readiness. Using automation, machine learning, and AI to streamline routine tasks and allow commanders to make decisions based on up-to-date data will enable the service to achieve efficiency savings while increasing its combat effectiveness. In Smith's words, "combining human talent and advanced processes [will allow the Marine Corps] to become even more lethal in support of the joint force and our allies and partners."

The Current Marine Corps Approach

The service is slow in moving towards its goals because it has decided, de facto, to pursue a two-track development strategy. It has concentrated efforts and resources at the highest echelons of the institution while relying on the rare confluence of expertise and individual initiative for progress at the lowest levels. This bifurcated approach lacks coherence and stymies progress.

Marine Corps Order 5231.4 outlines the service's approach to AI. Rather than making the operating forces the focus of effort, the order weights efforts in the supporting establishment. The supporting establishment has the expertise, resources, and authority to manage a program across the Marine Corps. But it lacks visibility into the specific issues facing individuals that could be solved with AI, machine learning, or automated data analysis.

At the tactical levels of the service, individuals are integrating these tools into their workflows. However, without broader sponsorship, this mainly occurs as the result of happy coincidence: when a single person has the technical skills to develop an automated data solution, recognizes a shortfall, and takes the initiative to implement it. Because the skills required to create, maintain, or customize projects for a unit are uncommon, scaling adoption or expanding the project is difficult. As a result, most individual projects wither on the vine, and machine learning, AI, and data analysis have only sporadically and temporarily penetrated the operating forces.

This two-track approach separates resources from problems, meaning that the highest levels of the service aren't directly involved in success at the tactical level. Tactical echelons don't have the time, resources, or tasking to develop and systematize these skill sets on their own. What's needed is a flat and collaborative bottom-up approach with central coordination.

The 18th Airborne Corps

Marine Corps doctrine and culture advocate carefully balancing centralized planning with decentralized execution and bottom-up refinement. Higher echelons pass flexible instructions to their subordinates, increasing specificity at each level. Leaders ensure standardization of training, uniformity of effort, and efficient use of resources. Bottom-up experimentation applies new ideas to concrete problems.

Machine learning and data analysis should be no different. The challenge is finding a way to link individual instances of innovation with the resources and influence to scale them across the institution. The Army's use of the 18th Airborne Corps to bridge the gap between service-level programs and individual initiatives offers a clear example of how to do so.

The 18th Airborne Corps fills a contingency-response role like the Marine Corps. Located at Fort Liberty, it is the headquarters element containing the 101st and 82nd Airborne Divisions, along with the 10th Mountain and 3rd Infantry Divisions. As part of a broader modernization program, the 18th Airborne Corps has focused on creating a technology ecosystem to foster innovation. Individual soldiers across the corps can build personal applications that aggregate, analyze, and present information in customizable dashboards that streamline work processes and allow for data-informed decision-making.

For example, soldiers from the 82nd Airborne Division created a single application to monitor and perform logistics tasks. The 18th Airborne Corps Data Warfare Company built a tool for real-time monitoring of in-theater supply levels with alerts for when certain classes of supply run low. Furthermore, the command integrates these projects and other data applications to streamline combat functions. For example, the 18th Airborne Corps practices integrating intelligence analysis, target acquisition, and fires through joint exercises like Scarlet Dragon.

As well as streamlining operational workflows, the data analytics improve training and readiness. The 18th Airborne Corps has developed a Warrior Skills training program in which it collects data to establish a baseline against which it can compare individual soldiers' skills over time. Finally, some of the barracks at Fort Liberty have embedded QR codes that soldiers scan to check in when they're on duty.

These examples demonstrate how a unit of data-literate individuals can leverage modern technology to increase the capacity of the entire organization. Many of these projects could not have been scaled beyond institutional boundaries without corps-level sponsorship. Furthermore, because the 18th Airborne Corps is an operational-level command, it connects soldiers in its divisions with the Armys service-level stakeholders.

Designating a Major Command as Service Lead

If the Marine Corps followed the 18th Airborne Corps model, it would designate one operating force unit as the service lead for data analysis and automation to link service headquarters with tactical units. Institutionalizing security systems, establishing boundaries for experimentation, expanding successful projects across a Marine expeditionary force, and implementing a standardized training program would create an ecosystem to cultivate the technical advances service leaders want.

This proposed force would also streamline the interactions between marines and the service and ensure manning continuity for units that develop data systems, so that efforts do not peter out as individuals rotate to new assignments. Because of its geographic proximity to Fort Liberty, and because 2d Marine Division artillery units have already participated in the recent Scarlet Dragon exercises and thus have some familiarity with the 18th Airborne Corps' projects, II Marine Expeditionary Force is a logical choice to serve as the service lead.

Once designated, II Marine Expeditionary Force should establish an office, directorate, or company responsible for the entire force's data literacy and automation effort. This would follow the 18th Airborne Corps model of establishing a data warfare company to house soldiers with specialized technical skills. This unit could then develop a training program to be implemented across the Marine expeditionary force. The focus of this effort would be a rank- and billet-appropriate education plan that teaches every marine in the Marine expeditionary force how to read, work with, communicate, and analyze data using low- or no-code applications like Power BI or the Army's Vantage system, with crucial billets learning how to build and maintain these applications. Drawing on the work it is undertaking with Training and Education Command, combined with its members' academic and industry expertise, the Marine Innovation Unit (of which I am a member) could develop a training plan based on the Army's model and work alongside the proposed office to implement it across II Marine Expeditionary Force.

This training plan would teach every marine the rudimentary skills necessary to implement simple solutions for themselves. The coordinating office would centralize overhead, standardize training, and scale valuable projects across the whole Marine expeditionary force. It would link high-level service efforts with the small-scale problems facing the operating forces that data literacy and automation could fix.

All the individuals interviewed agreed that engaged and supportive leadership has been an essential precondition for all successful data automation projects. Service-level tasking should ensure that all subordinate commanders take the initiative seriously. Once lower-echelon units see the hours of work spent on rote and mundane tasks that could be automated and then invested back into training and readiness, bureaucratic politics will melt away, and implementation should follow. The key is for a leader to structure the incentives for subordinates to encourage the first generation of adopters.

Forcing deploying units to perform another training requirement could overburden them. However, implementing this training carefully would ensure it is manageable. The Marine expeditionary force and its subordinate units' headquarters are not on deployment rotations, so additional training would not detract from their pre-deployment readiness process. Also, implementing these technologies would create significant time savings, freeing up extra time and manpower for training and readiness tasks.

Conclusion

Senior leaders across the Department of Defense and Marine Corps have stated that AI and machine learning are the way forward for the future force. The efficiency loss created by the service's current analog processes and static data (let alone the risk to mission and risk to force associated with these antiquated processes in a combat environment) is enough reason to adopt this approach. However, discussions with currently serving practitioners reveal that the Marine Corps needs to move more quickly. It has pursued a two-track model with innovation at the lowest levels and resources at the highest. Bridging the gap between these parallel efforts will be critical to meaningful progress.

If the Marine Corps intends to incorporate AI and machine learning into its deployed operations, it should build the groundwork by training its workforce and building familiarity during garrison operations. Once marines are familiar with and able to employ these tools in a stable and predictable environment, they will naturally use them when deployed to a hostile littoral zone. Designating one major command to act as the service lead would go a long way toward accomplishing that goal. This proposed command would follow the 18th Airborne Corps model of linking the strategic and tactical echelons of the force and implementing new and innovative ways of automating day-to-day tasks and data analysis. Doing so will streamline garrison operations and improve readiness.

Will McGee is an officer in the U.S. Marine Corps Reserves, currently serving with the Marine Innovation Unit. The views in this article are the author's and do not represent those of the Marine Innovation Unit, the U.S. Marine Corps, the Defense Department, or any part of the U.S. government.

Image: Midjourney

Goldman Sachs: Is there "too much spend, too little benefit" in AI craze? By Investing.com – Investing.com

Investing.com -- Tech giants and other firms are set to spend roughly $1 trillion in the coming years on developing their artificial intelligence capabilities, including investments in data centers, chips, and other AI-related infrastructure, according to analysts at Goldman Sachs.

But they argued that these expenditures have so far failed to yield much "beyond reports of efficiency gains" among AI developers, while Nvidia (NASDAQ:NVDA) -- the Wall Street darling and focal point of the craze around the nascent technology -- has seen its shares "sharply correct."

To explore whether heavy corporate spending on AI will deliver meaningful "benefits and returns," the investment bank spoke with a series of experts, including Daron Acemoglu, a professor at the Massachusetts Institute of Technology who specializes in economics.

AI spending's potential impact on productivity

Acemoglu took a largely skeptical stance on the outcome of the capital rush, estimating that only a quarter of AI-related actions will be "cost-effective to automate" within 10 years -- implying that AI will affect less than 5% of all tasks.

"Over this [10-year] horizon, AI technology will [...] primarily increase the efficiency of existing production processes by automating certain tasks or by making workers who perform these tasks more productive," Acemoglu told Goldman Sachs. "So, estimating the gains in productivity and growth from AI technology on a shorter horizon depends wholly on the number of production processes that the technology will impact and the degree to which this technology increases productivity or reduces costs over this timeframe."

However, he predicted that there will not be a "massive" number of tasks impacted by AI in the near term, adding that most actions humans currently perform -- such as manufacturing or mining -- are "multifaceted and require real-world interaction." Instead, Acemoglu said he expects AI will have the biggest influence in the coming years on "pure mental tasks," adding that while the number of these tasks will be "non-trivial," it will not be "huge."

Ultimately, Acemoglu forecast that AI will increase U.S. productivity by only 0.5% and bolster overall economic growth by 0.9% over the next decade.
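The arithmetic behind a number that small is a task-based calculation: the economy-wide gain is roughly the share of tasks AI actually ends up performing multiplied by the average cost saving on those tasks. The sketch below walks through that logic with purely illustrative inputs, which are assumptions for demonstration and not Acemoglu's own figures.

```python
# Back-of-the-envelope, task-based productivity estimate.
# All three inputs are illustrative assumptions, not Acemoglu's figures.
share_of_tasks_exposed = 0.20   # tasks AI could in principle perform
share_cost_effective = 0.25     # fraction of those worth automating within 10 years
avg_cost_saving = 0.25          # average cost reduction on the automated tasks

tasks_actually_affected = share_of_tasks_exposed * share_cost_effective
productivity_gain = tasks_actually_affected * avg_cost_saving

print(f"Tasks actually affected: {tasks_actually_affected:.1%}")   # 5.0%
print(f"Implied productivity gain: {productivity_gain:.2%}")       # 1.25%
```

Because the result is a product of small fractions, modest disagreements about any one input move the bottom line a lot, which helps explain why the skeptics and optimists quoted in the report land so far apart.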

AI's "limitations"

Acemoglu added he was "less convinced" that Big Tech's plans to greatly increase the amount of data and processing power they plug into AI models will lead to faster improvements of these systems.

"Including twice as much data from [social media platform] Reddit into the next version of [OpenAI's chatbot] ChatGPT may improve its ability to predict the next word when engaging in an informal conversation, but it won't necessarily improve a customer service representative's ability to help a customer troubleshoot problems with their video service," he said.

The quality of data is also crucial, Acemoglu noted, flagging that it remains unclear what will be the major sources of high-end information or whether it can be obtained "easily and cheaply."

Finally, he warned that the current architecture of AI technology itself "may have limitations."

"Human cognition involves many types of cognitive processes, sensory inputs, and reasoning capabilities. Large language models (LLMs) today have proven more impressive than many people would have predicted, but a big leap of faith is still required to believe that the architecture of predicting the next word in a sentence will achieve capabilities as smart as HAL 9000 in '2001: A Space Odyssey,'" Acemoglu said, referring to the fictional artificial intelligence character in the popular 1960s science fiction film.

An AI "bubble" or a "promising" spending cycle?

The Goldman Sachs analysts took a mixed view of the crush of spending on AI, with some saying the technology has yet to show it can solve the complex problems needed to justify the elevated expenditures.

These researchers also said they do not anticipate that AI costs will ever decline to such an extent that it will be affordable for companies to automate a large portion of tasks. Fundamentally, they said the AI story that has driven an uptick in the benchmark S&P 500 index so far this year is "unlikely to hold up."

Despite these concerns, other Goldman Sachs analysts took a more optimistic stance, forecasting that AI could lead to the automation of a quarter of all work actions. The current uptick in capital expenditures, they argued, seems "more promising" than prior spending cycles because "incumbents with low costs of capital and massive distribution networks and customer bases are leading it." They also predicted that U.S. productivity would improve by 9% and economic activity would grow by 6.1% cumulatively in the next decade thanks to AI advancements.

Overall, however, the Goldman Sachs analysts concluded that there is "still room for the AI theme to run, either because AI starts to deliver on its promise, or because bubbles take a long time to burst."

Meta starts testing user-created AI chatbots on Instagram – TechCrunch

Meta CEO Mark Zuckerberg announced on Thursday that the company will begin to surface AI characters made by creators through Meta AI studio on Instagram. The tests will begin in the U.S.

The social media company's announcement comes on the same day that a16z-backed chatbot company Character.AI rolled out the ability for users to talk with AI avatars over a call.

In a post on his broadcast channel, Zuckerberg noted that these chatbots will be clearly marked as AI so users are aware.

"Rolling out an early test in the U.S. of our AI studio so you might start seeing AIs from your favorite creators and interest-based AIs in the coming weeks on Instagram. These will primarily show up in messaging for now, and will be clearly labeled as AI," he said.

"It's early days and the first beta version of these AIs, so we'll keep working on improving them and make them available to more people soon," Zuckerberg added.

Zuckerberg noted that Meta worked with creators like the meme account Wasted and technology creator Don Allen Stevenson III to roll out early versions of creator-made chatbots.

In an interview Zuckerberg shared on his social channels, the CEO expanded on the use cases for AI avatars. "There needs to be a lot of different APIs that get created to reflect people's different interests. So a big part of the approach is going to be enabling every creator, and then eventually also every small business on the platform, to create an AI for themselves to help them interact with their community, and their customers if they're a business," he added.

Creators may also want to use AIs to engage with fans, as they don't currently have time to respond to all the incoming messages.

Still, he admitted that how good the AI avatars will ultimately be is going to become something of an art form that evolves and improves over time.

"I don't think we know, going into this, what is going to be the most engaging and entertaining and trust-building formula for this," Zuckerberg noted. "So we want to give people tools so that you can experiment with this and see what ends up working well," he said.

Meta will initially begin testing the feature with around 50 creators and a small percentage of users, and will then roll it out to more people over the next couple of months, with the hope of having it fully launched by August.

Meta first announced its AI studio last year at its developer conference to let businesses build custom chatbots.

Additional reporting: Sarah Perez

Nigerian blockchain advocacy group warns of repercussions from Binance dispute – crypto.news

Binance's current legal turmoil with the Nigerian government has drawn the attention of the nation's blockchain advocacy group, which has called for a balanced resolution.

The Blockchain Industry Coordinating Committee of Nigeria (BICCoN), which represents the nation's blockchain sector, has raised concerns about the country's international reputation.

In its statement, BICCoN urged a balanced approach that keeps in mind the nation's duty to protect national interests, such as economic stability and regulatory adherence. The group vouched for a resolution that promotes trust and confidence in the process.

As such, it advocated for an approach that encourages collaboration with global partners and stakeholders.

The group warned that continued delays in resolving the matter could harm the nation's blockchain industry.

The detention of Binance executive Tigran Gambaryan has begun to have ripple effects that threaten Nigeria's ability to maintain and consolidate crucial collaborations with international partners.

Further, the recent events have had a chilling effect on investment. The group has flagged a noticeable decline in foreign investment, which it deems detrimental to the country's economic growth.

Moreover, BICCoN stressed that the situation could lead to a withdrawal of support from international partners, which would leave the nation's regulators and law enforcement without the necessary tools and expertise to effectively regulate the industry.

"Nigerian regulators stand to gain immensely from continued access to advanced tools and resources provided by international blockchain entities," the statement said.

As such, the group is concerned that the standoff would hamper the ability of Nigerian authorities to combat financial crimes and ensure a secure environment for all stakeholders.

The statement also acknowledged Gambaryan's expertise, arguing that it would have been an invaluable asset to Nigerian regulators in their enforcement efforts.

"Instead, his detention undermines the potential for such collaborative efforts," BICCoN added.

BICCoN recommends constructive dialogue with Binance and other relevant stakeholders to work towards a mutually beneficial solution. The group also stressed that all processes should be transparent and fair and should adhere to international best practices.

Ultimately, BICCoN concluded that a balanced approach could resolve the recent issues fairly while maintaining the relationships that would help Nigeria create a supportive environment for its blockchain sector.

Gambaryan and Binance regional manager Nadeem Anjarwalla were detained on Feb. 26. The duo had traveled to Nigeria to help with Binance's defense against tax evasion and money laundering charges.

Gambaryan was detained after two meetings with Nigerian authorities, which allegedly began professionally but turned hostile.

His continued detention has stirred political waters, with U.S. lawmakers and officials urging the U.S. president's intervention.

Most recently, Arkansas Republican Rep. French Hill advocated for the release of Gambaryan during a FOX Business interview. Hill claimed the Binance executive was caught in a Nigerian political fight.

However, Nigerian regulators haven't been very responsive despite the escalating situation. On Jun. 6, Nigerian Minister of Information Mohammed Idris defended Gambaryan's trial despite several claims that he was being held in harsh conditions while suffering from malaria.

Binance Ups Security Measures To Prevent Account Misuse – FinanceFeeds

Binance, the world's largest cryptocurrency exchange, has implemented new security measures to prevent the misuse of account features and improve platform integrity.

According to the exchange, the decision follows the discovery of account misuse that provided certain users with unfair advantages. The new measures aim to foster a healthy and sustainable market environment that benefits all users, it said.

Earlier this month, a Chinese trader lost $1 million after falling victim to a hacking scam involving a promotional Google Chrome plugin called Aggr. The trader accused Binance of failing to implement necessary security measures despite the unusually high trading activity on his account. He also claimed that the exchange did not take timely action even after he reported the issue. According to the trader, Binance was aware of the fraudulent plugin and was conducting an internal investigation but did not inform users or take preventive measures.

Binance has warned that it will take stricter actions against account misuse, including the suspension or termination of accounts if necessary. The exchange added that misuse of accounts harms its reputation and negatively affects the experience of the majority of users who adhere to the rules.

The platform offers various account types, including sub-accounts, managed sub-accounts, and fund manager accounts, which are essential for legitimate use cases. However, Binance said that bad actors have been found misusing these features to circumvent controls, access better fee rates, and obtain higher application programming interface (API) limits. The platform considers unauthorized access to other users accounts a severe breach of its Terms of Use, Know Your Customer (KYC), and Know Your Business (KYB) policies.

To combat account misuse, Binance has increased the monitoring of all account usage and related activities. The platform encourages users to report any suspected misuse incidents and offers rewards for verified cases. The reward amount will be determined on a case-by-case basis, and incidents can be reported to [emailprotected].

Binance's new security measures are part of its ongoing efforts to combat security breaches. ZachXBT, a blockchain investigator, praised Binance on June 22 for its support during security incidents, noting the exchange's active role in helping victims and providing incident response.

Binance CEO Richard Teng also highlighted the exchange's collaboration with authorities to investigate a malicious attack on the Turkish crypto exchange BtcTurk, which resulted in the freezing of over $5 million in stolen funds.

Despite these efforts, Binance faces money laundering charges in Nigeria, where authorities have accused the company of illegally moving $26 billion out of the country.

Influencing ALT, ETHFI, MEME, IO, PYTH and TNSR, Binance Margin Introduces New FDUSD Trading Pairs – Blockchain News

Binance Margin has announced the addition of new FDUSD trading pairs on Cross and Isolated Margin, according to a recent announcement by the cryptocurrency exchange platform. The new pairs include ALT/FDUSD, ETHFI/FDUSD, IO/FDUSD, MEME/FDUSD, PYTH/FDUSD, TNSR/FDUSD, TAO/FDUSD.

This strategic move aims to enhance the trading experience for users by broadening the variety of trading options available on the platform. Binance Margin continually reviews and expands its list of trading choices, enabling users to diversify their portfolios and benefit from flexible trading strategies.

The announcement detailed the inclusion of the new FDUSD pairs in both Cross and Isolated Margin categories. This expansion is part of Binance's ongoing efforts to provide more comprehensive trading options and improved user experiences.

Binance highlighted the importance of referring to the Margin Data page for the most updated information on marginable assets, specific limits, collateral ratios, and rates. Users are advised to consult this resource to stay informed about the latest margin trading details and any potential changes.

Additionally, Binance issued a disclaimer noting that there may be discrepancies between translated versions of the announcement and the original English version. Users are encouraged to refer to the original version for the most accurate information.

Binance also emphasized the risks associated with digital asset trading, including market risk and price volatility. The platform clarified that the information provided does not constitute financial advice and urged users to carefully consider their investment experience, financial situation, and risk tolerance before engaging in trading activities.

For further details and to view the full announcement, visit the official Binance announcement page.
