
Why are fewer women using AI than men? – BBC.com

2 November 2023

Image source, Harriet Kelsall

Harriet Kelsall says she found that popular AI app ChatGPT made too many mistakes

Popular artificial intelligence (AI) chatbot ChatGPT now has more than 180 million users, but jeweller Harriet Kelsall says it isn't for her.

Being dyslexic, she admits that using it might help improve the clarity of her communication with customers on her website. But ultimately she says that she just doesn't trust it.

Ms Kelsall, who is based in Cambridge, says that when she experimented with ChatGPT this year, she noticed errors. She tested it by quizzing it about the crown worn by King Charles III in his coronation back in May, the St Edward's Crown.

"I asked ChatGPT to tell me some information about the crown, just to see what it would say," she says. "I know quite a bit about gemstones in the royal crowns, and I noticed there were large chunks within the text about it which were about the wrong crown."

Ms Kelsall adds that she is also concerned about people "passing off what ChatGPT tells them as independent thought, and plagiarising".

While ChatGPT has become hugely popular since its launch a year ago, Ms Kelsall's reluctance to use it appears to be significantly more common among women than men. While 54% of men now use AI in either their professional or personal lives, this falls to just 35% of women, according to a survey earlier this year.

What are the reasons for this apparent AI gender gap, and should it be a concern?

Image source, Getty Images

ChatGPT now has more than 180 million users around the world

Michelle Leivars, a London-based business coach, says she doesn't use AI to write for her, because she wants to retain her own voice and personality.

"Clients have said they booked sessions with me because the copy on my website didn't feel cookie cutter, and that I was speaking directly to them," she says. "People who know me have gone onto the website, and said that they can hear me saying the words and they could tell it was me straight away."

Meanwhile, Hayley Bystram, also based in London, has not been tempted to save time by using AI. Ms Bystram is the founder of matchmaking agency Bowes-Lyon Partnership, and meets her clients face-to-face to hand-pair them with like-minded others, with no algorithm involved.

"The place where we could use something such as ChatGPT is in our carefully crafted member profiles. which can take up to half a day to create," she says. "But for me it would take the soul and the personalisation out of the process, and it feels like it's cheating, so we carry on doing it the long-winded way."

Image source, Hayley Bystram

Hayley Bystram says that using AI feels like "cheating"

For Alexandra Coward, a business strategist based in Paisley, Scotland, using AI for content generation is just "heavy photoshopping".

She is also particularly concerned about the growing trend of people using AI to create images "that make them look the slimmest, youngest and hippest versions of themselves".

Ms Coward adds: "We're moving towards a space where not only will your clients not recognise you in person, you won't recognise you in person."

While all these seem valid reasons to give AI a wide berth, AI expert Jodie Cook says there are deeper, more ingrained reasons why women are not embracing the technology as much as men.

"Stem fields [science, technology, engineering, and mathematics] have traditionally been dominated by males," says Ms Cook, who is the founder of Coachvox.ai, an app that allows business leaders to create AI clones of themselves.

"The current trend in the adoption of AI tools appears to mirror this disparity, as the skills required for AI are rooted in Stem disciplines."

In the UK, just 24% of the workforce across the Stem sectors are female, and as a consequence "women may feel less confident using AI tools", adds Ms Cook. "Even though many tools don't require technical proficiency, if more women don't view themselves as technically skilled, they might not experiment with them.

"And AI also still feels like science fiction. In the media and popular culture, science fiction tends to be marketed at men."

Ms Cook says that moving forward she wants to see more women both use AI and work in the sector. "As the industry grows, we definitely don't want to see a widening gap between the genders."

Yet psychologist Lee Chambers says that typically female thinking and behaviour may be holding some women back from embracing AI.

"It's the confidence gap - women tend to want to have a high level of competence in something before they start using it, " he says. "Whereas men tend to be happy to go into something without much competence."

Image source, Lee Chambers

Psychologist Lee Chambers says women fear that using AI might raise questions of competence

Mr Chambers also says that women may fear having their ability questioned, if they use AI tools.

"Women are more likely to be accused of not being competent, so they have to emphasise their credentials more to demonstrate their subject matter expertise in a particular field," he says. "There could be this feeling that if people know that you, as a woman, use AI, it's suggesting that you might not be as qualified as you are.

"Women are already discredited, and have their ideas taken by men and passed off as their own, so having people knowing that you use an AI might also play into that narrative that you're not qualified enough. It's just another thing that's debasing your skills, your competence, your value."

Or as Harriet Kelsall puts it: "I value authenticity and human creativity."

Excerpt from:

Why are fewer women using AI than men? - BBC.com


Brave responds to Bing and ChatGPT with a new anonymous and … – The Verge

Brave, the privacy-focused browser that automatically blocks unwanted ads and trackers, is rolling out Leo, a native AI assistant that the company claims provides unparalleled privacy compared to some other AI chatbot services. Following several months of testing, Leo is now available for free to all Brave desktop users running version 1.60 of the web browser. Leo is rolling out in phases over the next few days and will be available on Android and iOS in the coming months.

The core features of Leo aren't too dissimilar from other AI chatbots like Bing Chat and Google Bard: it can translate, answer questions, summarize webpages, and generate new content. Brave says the benefit of Leo over those offerings is that it aligns with the company's focus on privacy: conversations with the chatbot are not recorded or used to train AI models, and no login information is required to use it. As with other AI chatbots, however, Brave notes that Leo's outputs should be treated with care for potential inaccuracies or errors.

Brave users can access Leo directly from the browser sidebar, seen here on the right of the webpage. Image: Brave

"AI can be a powerful tool, but it can also present growing concerns for data privacy, and there's a need for a privacy-first solution," said Brian Bondy, CTO and co-founder of Brave, in a press release. "Brave is committed to pairing AI with user privacy, and will provide our users with secure and personalized AI assistance where they already spend their time online."

Brave says that additional models will be available to Leo Premium users, alongside access to higher-quality conversations, priority queuing during peak usage, higher rate limits, and early access to new features. In a statement to The Verge, Bondy said that Leo is "built in a way that many different models can be plugged into the feature. We believe that more models will be offered over time and that users should be able to choose among them."

Update, November 2nd, 1:30PM ET: Updated to include a statement from Brave co-founder Brian Bondy regarding future AI models coming to Leo.

See the original post:

Brave responds to Bing and ChatGPT with a new anonymous and ... - The Verge


Is This Artificial Intelligence (AI) Stock-Split Stock a Buy After Q3 … – The Motley Fool

As earnings season kicks into high gear, all eyes will be on big tech. Alphabet (GOOG) (GOOGL) recently reported financial results for the quarter ended Sept. 30. Once again, the company showed noticeable progress in both its advertising unit and cloud segment as competition from TikTok, Meta Platforms, Microsoft, and Amazon lingers.

Over the last several months, Alphabet has invested significant capital in artificial intelligence (AI) applications and integrated the technology across all aspects of its business. Nonetheless, Alphabet stock appears to be taking a bit of a breather at the moment.

Let's dig into the Q3 report, look at how AI is fueling growth within Alphabet's ecosystem, and assess whether investors should scoop up some shares in the face of the stock's recent mundane price action.

The majority of Alphabet's revenue is captured in two categories: advertising and cloud. The table illustrates the revenue profile of each of these segments for the quarter ended Sept. 30.

Data source: Q3 earnings release. Dollar amounts in millions. Table by author.

On the advertising front, Alphabet increased revenue by 9%, which was primarily fueled by Google Search and YouTube. The company's Services business (which is mostly composed of advertising) grew 11%.

An important dynamic for investors to understand is that the majority of Alphabet's operating profits stem from Services. Per the earnings report, the company increased operating income for Services by 26% during the third quarter and boasted a 35% margin.

To put this into perspective, Alphabet's operating margin for Services in Q3 2022 was 31%. This is a massive expansion in margin, which flows straight to the bottom line.
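Those three figures are mutually consistent, which a few lines of arithmetic make explicit. This is a rough sketch using only the numbers quoted above; real segment accounting has more moving parts.

```python
# Consistency check using only the figures quoted above.
income_growth = 1.26   # Services operating income up 26% year over year
margin_2022 = 0.31     # Q3 2022 Services operating margin
margin_2023 = 0.35     # Q3 2023 Services operating margin

# margin = operating income / revenue, so revenue = operating income / margin.
implied_revenue_growth = income_growth * (margin_2022 / margin_2023) - 1
print(f"Implied Services revenue growth: {implied_revenue_growth:.1%}")
# -> ~11.6%, in line with the 11% Services growth reported above.
```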

For the quarter ended Sept. 30, Alphabet reported free cash flow of $22.6 billion, an increase of 40% year over year. By expanding margins and generating more excess cash, Alphabet has been able to invest in additional services and resources. Namely, the company's foray into generative AI is already yielding meaningful returns, underscored by a return to accelerating revenue in advertising, as well as consistently profitable cloud operations.

Let's dig into how Alphabet is integrating AI across the business and what management has to say about its future prospects.

Image source: Getty Images.

One of the more headline-grabbing topics last quarter was coverage of hedge fund manager Bill Ackman's position in Alphabet stock. In recent interviews, Ackman indicated that he finds Alphabet compelling because the company is in a unique position to leverage its vast data repository in ways that can be stitched together across a wide array of products and services, benefiting both consumers and enterprises.

Alphabet's management spent a good portion of the earnings call providing details around how AI is becoming more integrated into Search and Cloud. On the Search front, Alphabet rolled out a feature called Search Generative Experience (SGE). By layering generative AI into Search, Alphabet is effectively trying to increase its surface area on the Internet. Stated differently, SGE provides users with more links to choose from, thereby "creating new opportunities for content to be discovered."

While it is early innings for SGE, management appears optimistic about its potential to disrupt the existing advertising structure native to Search today.

Another promising opportunity rooted in AI is Alphabet's large language model. The model, dubbed Google Bard, was built to be "a complementary experience to Google Search." Since its commercial release earlier this year, Bard has made significant progress. The tool can now be used across many Google apps, including Workspace, YouTube, and Maps.

When it comes to the cloud, Alphabet's results are pretty impressive. The company shared that over half of all generative AI start-ups that have raised outside capital are customers of Google Cloud.

One of the core pillars of Google Cloud is a multifaceted product called Duet AI. Customers such as PayPal are using Duet AI to speed up software development, while others are taking advantage of the tool's data-analysis functions within Google Workspace apps.

These dynamics underline precisely what Ackman was getting at. In a relatively short time frame, Alphabet has already integrated AI across several different aspects of its business. For this reason, investors could argue that the growth rates in the table have ample opportunity to eclipse their current profile.

GOOG PE Ratio data by YCharts

Alphabet stock is trading well below its prior highs on a price-to-earnings (P/E) and price-to-free cash flow basis. More specifically, the decline becomes more pronounced around the October window, shortly after the company released Q3 earnings.

There is no doubt that Alphabet faces stiff competition from Microsoft and Amazon when it comes to cloud computing and artificial intelligence (AI). But this financial review demonstrates how Alphabet is already benefiting from a suite of products and services connected by AI. Considering that the macroeconomy is still vulnerable to rising interest rates and inflation, I think it's appropriate to believe that Alphabet's advertising and cloud businesses are not even close to peak performance.

My fellow Fool Keith Speights recently referenced Alphabet stock as a "no-brainer buy." I wholeheartedly agree with that position and think now is an incredible opportunity to buy the dip in Alphabet stock and hold for the long term. Alphabet has made incredible progress on its artificial intelligence (AI) roadmap, and the company's strong liquidity profile suggests it has the financial horsepower to continue innovating and releasing additional resources at a fast pace.

As AI becomes more integrated across Alphabet's ecosystem, users should become more engaged and sticky, which will ultimately lead to further top-line growth and margin expansion. From my viewpoint, the current payoff from AI efforts is really encouraging, and the best is yet to come.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Adam Spatacco has positions in Alphabet, Amazon, Meta Platforms, and Microsoft. The Motley Fool has positions in and recommends Alphabet, Amazon, Meta Platforms, Microsoft, and PayPal. The Motley Fool recommends the following options: short December 2023 $67.50 puts on PayPal. The Motley Fool has a disclosure policy.

See the original post:

Is This Artificial Intelligence (AI) Stock-Split Stock a Buy After Q3 ... - The Motley Fool


AI pioneer Yoshua Bengio warns against letting Big Tech control rules – HT Tech

In the world of Artificial Intelligence (AI), there's a big problem that we should pay attention to, says one of the godfathers of AI. A respected figure in the field, Yoshua Bengio, is concerned about the growing power of a few big companies in AI. He's worried that these companies might have too much control over AI technology. He even thinks it's one of the main problems we face when it comes to AI.

Bengio, who has won a prestigious Turing Award for his work in AI, recently talked to Insider about his concerns. He said, "We are making more and more powerful AI systems, and the big question for democracy is who gets to control these systems? Is there a risk that only a few companies will have all the power?" It's an important question that has been on his mind for years, but recent developments, like the emergence of systems like ChatGPT, have made him even more worried about this issue.

Yann LeCun, another important figure in AI, has raised similar concerns. He suggested that influential tech leaders like Sam Altman from OpenAI are trying to control AI by pushing for stricter rules and regulations. However, Bengio doesn't agree with this idea. He doesn't think these tech leaders are trying to take over the AI industry.

Bengio believes that it's clear we should not let the big companies write the rules for AI. But he disagrees with the notion that these companies are trying to manipulate the rules in their favor. He thinks that the rules and regulations, as they are currently being discussed, won't necessarily benefit the big tech companies.

According to Bengio, the proposed regulations are aimed at making sure the big AI systems built by these large companies are closely watched and regulated. This means that the big companies will face more scrutiny and higher costs. However, smaller players who work on more specialised AI or create applications using the big AI systems won't be under the same strict regulations.

In short, Bengio wants to make sure that the AI rules are fair and not controlled by just a few big companies. He believes that regulations should ensure that the powerful AI systems are monitored closely. This way, everyone can benefit from AI technology without worrying about it being controlled by a select few.

See the rest here:

AI pioneer Yoshua Bengio warns against letting Big Tech control rules - HT Tech


Deal Dive: AI’s not the only sector dodging the funding slowdown – TechCrunch

A tougher fundraising environment reveals which companies and sectors investors have real conviction in, and which areas aren't attractive outside of a bull market. AI startups dominated dealmaking this year, but there is another sector that VCs have stayed committed to: defense tech.

We saw the latest example of this trend just this week. On Tuesday, Shield AI raised a $200 million Series F round led by Thomas Tull's US Innovative Technology Fund, with participation from Snowpoint Ventures and Riot Ventures, among others. The round values the San Diego-based autonomous drone and aircraft startup at $2.7 billion.

The sheer size of the round alone makes this deal interesting. Mega-rounds over $100 million have become uncommon enough to warrant raised eyebrows in today's climate. Through the third quarter of 2023, only 194 rounds above $100 million were raised, compared to 538 in 2022 and 841 in 2021, according to PitchBook. Late-stage fundraising has also been largely muted for much of 2023. Just over $57.3 billion was invested into late-stage startups through the third quarter of this year, much lower than the $94 billion such companies raised in 2022 and the $152 billion we saw in 2021.

Brandon Tseng, the co-founder and president of Shield AI, told TechCrunch+ his company was able to raise in this environment largely because of its metrics. The company's revenue is growing 90% year over year, per Tseng, and it is on the path to becoming profitable in 2025.

This round is also made more interesting by the space the company operates in, since it's the latest sign of how much investors have leaned into defense tech in recent years.

Tseng agreed that the investor appetite for companies like his has improved a lot, and he recalled how Shield AI's first few fundraises were particularly hard.

Read the original:

Deal Dive: AI's not the only sector dodging the funding slowdown - TechCrunch


Google DeepMind's robotics head on general-purpose robots, generative AI and office WiFi – TechCrunch

Image Credits: DeepMind

[A version of this piece first appeared in TechCrunch's robotics newsletter, Actuator. Subscribe here.]

Earlier this month, Google's DeepMind team debuted Open X-Embodiment, a database of robotics functionality created in collaboration with 33 research institutes. The researchers involved compared the system to ImageNet, the landmark database founded in 2009 that is now home to more than 14 million images.

"Just as ImageNet propelled computer vision research, we believe Open X-Embodiment can do the same to advance robotics," researchers Quan Vuong and Pannag Sanketi noted at the time. "Building a dataset of diverse robot demonstrations is the key step to training a generalist model that can control many different types of robots, follow diverse instructions, perform basic reasoning about complex tasks and generalize effectively."

At the time of its announcement, Open X-Embodiment contained 500+ skills and 150,000 tasks gathered from 22 robot embodiments. Not quite ImageNet numbers, but it's a good start. DeepMind then trained its RT-1-X model on the data and used it to train robots in other labs, reporting a 50% success rate compared to the in-house methods the teams had developed.
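To make the pooling idea concrete, here is a minimal sketch of normalizing demonstrations from different robot embodiments into one shared schema. It is illustrative only; the field names and record layout are assumptions, not Open X-Embodiment's actual format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Episode:
    """One robot demonstration, normalized across embodiments."""
    embodiment: str           # e.g. "7dof_arm" (hypothetical name)
    instruction: str          # natural-language task description
    observations: List[list]  # per-step sensor features
    actions: List[list]       # per-step commands in a shared action space

def normalize(raw: dict, embodiment: str) -> Episode:
    """Map one lab's raw episode format into the shared schema."""
    return Episode(
        embodiment=embodiment,
        instruction=raw["task"],
        observations=[step["obs"] for step in raw["steps"]],
        actions=[step["action"] for step in raw["steps"]],
    )

# Pool episodes from many labs into one corpus, which a single
# generalist model can then be trained on.
raw_episode = {"task": "pick up the cup",
               "steps": [{"obs": [0.1, 0.2], "action": [0.0, 1.0]}]}
corpus: List[Episode] = [normalize(raw_episode, "7dof_arm")]
```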

I've probably repeated this dozens of times in these pages, but it truly is an exciting time for robotic learning. I've talked to so many teams approaching the problem from different angles with ever-increasing efficacy. The reign of the bespoke robot is far from over, but it certainly feels as though we're catching glimpses of a world where the general-purpose robot is a distinct possibility.

Simulation will undoubtedly be a big part of the equation, along with AI (including the generative variety). It still feels like some firms have put the cart before the horse here when it comes to building hardware for general tasks, but a few years down the road, who knows?

Vincent Vanhoucke is someone I've been trying to pin down for a bit. If I was available, he wasn't. Ships in the night and all that. Thankfully, we were finally able to make it work toward the end of last week.

Vanhoucke is new to the role of Google DeepMind's head of robotics, having stepped into the role back in May. He has, however, been kicking around the company for more than 16 years, most recently serving as a distinguished scientist for Google AI Robotics. All told, he may well be the best possible person to talk to about Google's robotic ambitions and how it got here.

At what point in DeepMind's history did the robotics team develop?

I was originally not on the DeepMind side of the fence. I was part of Google Research. We recently merged with the DeepMind efforts. So, in some sense, my involvement with DeepMind is extremely recent. But there is a longer history of robotics research happening at Google DeepMind. It started from the increasing view that perception technology was becoming really, really good.

A lot of the computer vision, audio processing, and all that stuff was really turning the corner and becoming almost human level. We started to ask ourselves, "Okay, assuming that this continues over the next few years, what are the consequences of that?" One clear consequence was that suddenly having robotics in a real-world environment was going to be a real possibility. Being able to actually evolve and perform tasks in an everyday environment was entirely predicated on having really, really strong perception. I was initially working on general AI and computer vision. I also worked on speech recognition in the past. I saw the writing on the wall and decided to pivot toward using robotics as the next stage of our research.

My understanding is that a lot of the Everyday Robots team ended up on this team. Google's history with robotics dates back significantly farther. It's been 10 years since Alphabet made all of those acquisitions [Boston Dynamics, etc.]. It seems like a lot of people from those companies have populated Google's existing robotics team.

There's a significant fraction of the team that came through those acquisitions. It was before my time; I was really involved in computer vision and speech recognition, but we still have a lot of those folks. More and more, we came to the conclusion that the entire robotics problem was subsumed by the general AI problem. Really solving the intelligence part was the key enabler of any meaningful progress in real-world robotics. We shifted a lot of our efforts toward solving that: perception, understanding and control in the context of general AI was going to be the meaty problem to solve.

It seemed like a lot of the work that Everyday Robots was doing touched on general AI or generative AI. Is the work that team was doing being carried over to the DeepMind robotics team?

We had been collaborating with Everyday Robots for, I want to say, seven years already. Even though we were two separate teams, we have very, very deep connections. In fact, one of the things that prompted us to really start looking into robotics at the time was a collaboration that was a bit of a skunkworks project with the Everyday Robots team, where they happened to have a number of robot arms lying around that had been discontinued. They were one generation of arms that had led to a new generation, and they were just lying around, doing nothing.

We decided it would be fun to pick up those arms, put them all in a room and have them practice and learn how to grasp objects. The very notion of learning a grasping problem was not in the zeitgeist at the time. The idea of using machine learning and perception as the way to control robotic grasping was not something that had been explored. When the arms succeeded, we gave them a reward, and when they failed, we gave them a thumbs-down.

For the first time, we used machine learning and essentially solved this problem of generalized grasping, using machine learning and AI. That was a lightbulb moment at the time. There really was something new there. That triggered both the investigations with Everyday Robots around focusing on machine learning as a way to control those robots. And also, on the research side, pushing a lot more robotics as an interesting problem to apply all of the deep learning AI techniques that we've been able to work so well into other areas.
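[For illustration: the setup Vanhoucke describes, a reward on success and a thumbs-down on failure, is reward-driven trial and error. The toy sketch below shows that loop in miniature; the discretized grasp candidates and simulated success rates are stand-ins, not DeepMind's actual system.]

```python
import random

def pick_grasp(scores, candidates, explore=0.1):
    """Choose the grasp the current policy rates highest, exploring occasionally."""
    if random.random() < explore:
        return random.choice(candidates)
    return max(candidates, key=lambda g: scores.get(g, 0.0))

def update(scores, grasp, succeeded, lr=0.2):
    """Reward on success, thumbs-down on failure; nudge the score toward it."""
    reward = 1.0 if succeeded else -1.0
    scores[grasp] = scores.get(grasp, 0.0) + lr * (reward - scores.get(grasp, 0.0))

# Toy world: grasp candidates are discretized approach angles, and the
# 90-degree approach secretly works best.
scores = {}
for _ in range(1000):
    g = pick_grasp(scores, candidates=[0, 45, 90, 135])
    succeeded = random.random() < (0.2 + 0.6 * (g == 90))
    update(scores, g, succeeded)

print(scores)  # the 90-degree grasp should end up with the highest score
```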

Was Everyday Robots absorbed by your team?

A fraction of the team was absorbed by my team. We inherited their robots and still use them. To date, we're continuing to develop the technology that they really pioneered and were working on. The entire impetus lives on with a slightly different focus than what was originally envisioned by the team. We're really focusing on the intelligence piece a lot more than the robot building.

You mentioned that the team moved into the Alphabet X offices. Is there something deeper there, as far as cross-team collaboration and sharing resources?

It's a very pragmatic decision. They have good Wi-Fi, good power, lots of space.

I would hope all the Google buildings would have good Wi-Fi.

You'd hope so, right? But it was a very pedestrian decision of us moving in here. I have to say, a lot of the decision was they have a good café here. Our previous office had not-so-good food, and people were starting to complain. There is no hidden agenda there. We like working closely with the rest of X. I think there's a lot of synergies there. They have really talented roboticists working on a number of projects. We have collaborations with Intrinsic that we like to nurture. It makes a lot of sense for us to be here, and it's a beautiful building.

There's a bit of overlap with Intrinsic, in terms of what they're doing with their platform: things like no-code robotics and robotics learning. They overlap with general and generative AI.

It's interesting how robotics has evolved from every corner being very bespoke and taking on a very different set of expertise and skills. To a large extent, the journey we're on is to try and make general-purpose robotics happen, whether it's applied to an industrial setting or more of a home setting. The principles behind it, driven by a very strong AI core, are very similar. We're really pushing the envelope in trying to explore how we can support as broad an application space as possible. That's new and exciting. It's very greenfield. There's lots to explore in the space.

I like to ask people how far off they think we are from something we can reasonably call general-purpose robotics.

There is a slight nuance with the definition of general-purpose robotics. We're really focused on general-purpose methods. Some methods can be applied to both industrial or home robots or sidewalk robots, with all of those different embodiments and form factors. We're not predicated on there being a general-purpose embodiment that does everything for you, more than if you have an embodiment that is very bespoke for your problem. It's fine. We can quickly fine-tune it into solving the problem that you have, specifically. So this is a big question: Will general-purpose robots happen? That's something a lot of people are tossing around hypotheses about, if and when it will happen.

Thus far there's been more success with bespoke robots. I think, to some extent, the technology has not been there to enable more general-purpose robots to happen. Whether that's where the business model will take us is a very good question. I don't think that question can be answered until we have more confidence in the technology behind it. That's what we're driving right now. We're seeing more signs of life that very general approaches that don't depend on a specific embodiment are plausible. The latest thing we've done is this RT-X project. We went around to a number of academic labs (I think we have 30 different partners now) and asked to look at their tasks and the data they've collected. Let's pull that into a common repository of data, and let's train a large model on top of it and see what happens.

What role will generative AI play in robotics?

I think it's going to be very central. There was this large language model revolution. Everybody started asking whether we can use a lot of language models for robots, and I think it could have been very superficial. You know, "Let's just pick up the fad of the day and figure out what we can do with it," but it's turned out to be extremely deep. The reason for that is, if you think about it, language models are not really about language. They're about common sense reasoning and understanding of the everyday world. So, if a large language model knows you're looking for a cup of coffee, you can probably find it in a cupboard in a kitchen or on a table.

Putting a coffee cup on a table makes sense. Putting a table on top of a coffee cup is nonsensical. It's simple facts like that you don't really think about, because they're completely obvious to you. It's always been really hard to communicate that to an embodied system. The knowledge is really, really hard to encode, while those large language models have that knowledge and encode it in a way that's very accessible and we can use. So we've been able to take this common-sense reasoning and apply it to robot planning. We've been able to apply it to robot interactions, manipulations, human-robot interactions, and having an agent that has this common sense and can reason about things in a simulated environment, alongside perception, is really central to the robotics problem.
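[For illustration: the pattern Vanhoucke describes, using a language model's common sense to vet candidate robot actions, can be sketched in a few lines. The `ask_llm` callable below is a stand-in for any language-model call, not a real DeepMind interface, and the toy scorer is purely illustrative.]

```python
from typing import Callable, List

def plan_step(goal: str, candidates: List[str],
              ask_llm: Callable[[str], float]) -> str:
    """Rank candidate actions by an LLM's common-sense plausibility score."""
    def score(action: str) -> float:
        prompt = (f"Goal: {goal}\nProposed robot action: {action}\n"
                  "On a 0-1 scale, how sensible is this action?")
        return ask_llm(prompt)
    return max(candidates, key=score)

def toy_llm(prompt: str) -> float:
    # Stand-in scorer: a real system would query a language model here.
    return 0.9 if "cupboard" in prompt else 0.1

best = plan_step(
    goal="fetch a cup of coffee",
    candidates=["open the kitchen cupboard",
                "put the table on top of the cup",
                "spin in place"],
    ask_llm=toy_llm,
)
print(best)  # -> "open the kitchen cupboard"
```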

Simulation is probably a big part of collecting data for analysis.

Yeah. It's one ingredient of this. The challenge with simulation is that you then need to bridge the simulation-to-reality gap. Simulations are an approximation of reality. They can be very difficult to make precise and truly reflective of reality. The physics of a simulator have to be good. The visual rendering of reality in that simulation has to be very good. This is actually another area where generative AI is starting to make its mark. You can imagine that instead of actually having to run a physics simulator, you just generate using image generation or a generative model of some kind.

Tye Brady recently told me Amazon is using simulation to generate packages.

That makes a lot of sense. And going forward, I think beyond just generating assets, you can imagine generating futures. Imagine what would happen if the robot did an action? And verifying that it's actually doing the thing you wanted it to, and using that as a way of planning for the future. It's sort of like the robot dreaming, using generative models, as opposed to having to do it in the real world.
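[For illustration: that "dreaming" idea corresponds to model-based planning, where a learned model imagines futures before the robot acts. The random-shooting sketch below is a minimal illustrative version, with a toy hand-written dynamics model standing in for a learned generative one.]

```python
import random
from typing import Callable, List, Sequence

def dream_plan(state: Sequence[float],
               dynamics: Callable[[Sequence[float], int], Sequence[float]],
               reward: Callable[[Sequence[float]], float],
               actions: List[int], horizon: int = 5, samples: int = 64) -> int:
    """Imagine `samples` random action sequences with the model ("dreaming"),
    then return the first action of the best imagined trajectory."""
    best_score, best_first = float("-inf"), actions[0]
    for _ in range(samples):
        s = list(state)
        plan = [random.choice(actions) for _ in range(horizon)]
        for a in plan:
            s = dynamics(s, a)  # one imagined step, never executed for real
        if reward(s) > best_score:
            best_score, best_first = reward(s), plan[0]
    return best_first

# Toy world: a 1-D robot at position 0 wants to reach position 3.
step = lambda s, a: [s[0] + (1 if a == 1 else -1)]   # action 1 = move right
goal = lambda s: -abs(s[0] - 3)                      # closer is better
print(dream_plan([0.0], step, goal, actions=[0, 1]))  # first move of the best dream
```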

More:

Google DeepMind's robotics head on general-purpose robots, generative AI and office WiFi - TechCrunch


Are EU regulators ready for concentration in the AI market? – EURACTIV

Artificial Intelligence is the next frontier of market concentration in the internet economy, but experts who spoke to Euractiv feel that even the EU's shiny new regulatory tools might be ill-suited to prevent abuses of market dominance.

In the coming weeks, EU policymakers are expected to finalise the AI Act, landmark legislation to regulate Artificial Intelligence (AI) based on its capacity to cause harm. Since the draft law was first proposed, the discussion has been disrupted by the meteoric rise of ChatGPT and similar models.

The key to ChatGPT's success was not its use of generative AI, which has been around for some time, but rather the unprecedented scale and performance of its model, OpenAI's GPT-3.5, which has already been surpassed by GPT-4.

As a result, the discussions on the AI Act have been departing from the original horizontal nature of the law in favour of introducing stricter obligations for "high impact" foundation models like GPT-4.

This more targeted approach focusing on the most impactful actors, which incidentally happen to be primarily non-European companies, has become increasingly recurrent in EU digital policy, from the very large online platforms of the Digital Services Act (DSA) to the gatekeepers of the Digital Markets Act (DMA).

References to these categories are increasingly common in legislative provisions targeting Big Tech companies. However, no such cross-link is available for the EU's AI rulebook due to the DMA's most spectacular failure to date: not managing to designate any cloud service.

"Big Tech is leveraging its market power in the cloud sector to gain a dominant position in the AI market. This process has been ongoing for a long time," Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, told Euractiv.

The question of which foundation models should be considered high impact is still a moving target, with policymakers leaning toward a combination of different criteria. However, one of the criteria initially floated was the amount of computing power used to train the model.

Computing power is a critical component of AI. It is concentrated mainly in the hands of companies that have reached massive economies of scale for their commercial cloud services: hyperscalers like Amazon's AWS, Microsoft's Azure and Google Cloud.

There is no direct relation between being a hyperscaler and being a leading company in the field of AI. In addition, using the computing power needed to train a model as a criterion for designating a high-impact foundation model might also have a perverse effect, as investing more initially usually means the model is more robust.
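For a sense of what a compute-based threshold would actually measure, a widely used back-of-the-envelope rule estimates training compute as roughly six floating-point operations per model parameter per training token. The sketch below applies that heuristic to two hypothetical model sizes; it is an approximation for illustration, not a metric from the AI Act.

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token.
    (A common rule of thumb, not an official regulatory metric.)"""
    return 6.0 * params * tokens

# Hypothetical model sizes, for illustration only.
small = training_flops(params=7e9, tokens=1e12)    # 7B params, 1T tokens
large = training_flops(params=175e9, tokens=2e12)  # 175B params, 2T tokens
print(f"{small:.1e} vs {large:.1e} FLOPs")         # ~4.2e22 vs ~2.1e24
```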

However, training a model is only one part of the equation, as constant computing power is needed to fine-tune the model and to run its day-to-day operations.

Moreover, the impact of a foundation model is, to a large extent, proportionate to its user base. At the same time, only a few companies worldwide can run an AI model with hundreds of millions of users, such as ChatGPT.

"Nobody can build a cutting-edge foundation model without having some kind of partnership with a Big Tech company," Max von Thun, Europe Director at the Open Markets Institute, told Euractiv.

In this context, leading AI companies are partnering up with tech giants without any intervention from competition authorities, as was the case for OpenAI with Microsoft and Anthropic with Amazon. These investments are often accompanied by more or less exclusive arrangements on the underlying cloud infrastructure.

"Considering these partnerships as mergers is tricky because it depends on whether the cloud provider has a stake and influence on the generative AI provider, and on the type of relationship, like whether it's an exclusive or only a strategic partnership," Christophe Carugati, an affiliate fellow at Bruegel, told Euractiv.


The idea of a foundation model is that it can be adapted to various purposes, as new AI applications can be built on top of them. Since ChatGPTs public launch, the hype around AI has led to the blossoming of thousands of AI-driven companies.

However, the expensive infrastructural costs related to powerful AI models are already pushing this market to concentrate in fewer hands.

"Many of the current players are suffering huge losses, largely because of how expensive the models are to run," said Zach Meyers, a Centre for European Reform research fellow.

"It seems inevitable that many of the current players will either be left behind or acquired by bigger companies."

According to Andrea Renda, one of the experts who has contributed the most to shaping the AI Act behind the scenes, we are going toward a "platformisation" of the AI market, whereby most new AI models will be built upon a handful of foundation models.

This market concentration could give dominant players several ways to further entrench their position. For instance, when an AI solution is built on a foundation model, the downstream economic operator might be forced to run its AI application on the same cloud infrastructure, in a process known as bundling.

That is already the case when an AI solution is built as an Application Programming Interface (API) to a foundation model, providing a sort of filter that adapts the model's response to the needs of the AI solution. As the query is run directly against the foundation model, the API is supported by its underlying cloud infrastructure.
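In code, such a downstream "AI solution" is often little more than a thin wrapper around the upstream model's endpoint, which is why it inherits that provider's cloud infrastructure. A minimal sketch follows; the endpoint URL, model name and payload shape are hypothetical, not any real provider's API.

```python
import json
import urllib.request

# Hypothetical endpoint, for illustration only.
FOUNDATION_MODEL_URL = "https://api.example-foundation-model.com/v1/complete"

def contract_summarizer(contract_text: str) -> str:
    """A downstream AI solution: a thin filter over a foundation model.
    Every call runs on the upstream provider's cloud infrastructure,
    which is how the bundling described above arises."""
    payload = {
        "model": "example-model",  # hypothetical model name
        "prompt": "Summarize this contract for a non-lawyer:\n" + contract_text,
    }
    request = urllib.request.Request(
        FOUNDATION_MODEL_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["text"]  # assumed response field
```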

Conversely, hyperscalers would be incentivised to self-preference or bundle their foundation models with their cloud offers.

"What we are witnessing is some of the Big Tech giants occupying the territory by making large investments in a handful of Gen AI companies, without anyone looking into it. It's like we learned nothing from the recent past," antitrust economist Cristina Caffarra told Euractiv.

"The usual suspects are grandfathering market power into the future, and there is a lot of hand-wringing, but it's already happened," she said.

One way to unbundle the foundation model and the cloud service underneath is by using a fully open-source foundation model. However, these are rather rare since many AI models that claim to be open-source tend to retain critical information.


Self-preferencing and bundling are critical elements that enabled the formation of mono- and oligopolies in critical parts of the internet economy, precisely what the DMA promised to prevent with its ex-ante obligations, since antitrust probes in the online sphere tend to conclude when the damage is already done.

"One of the aims of the DMA is to move faster to prevent monopolisation before it's too late. Ironically, the platforms designated so far are in markets that are already highly concentrated. With AI and cloud, there is the possibility to be more proactive," von Thun added.

The DMA failed to designate any hyperscaler as a gatekeeper because its quantitative thresholds did not fit the cloud sector.

Euractiv understands that France and Germany are pushing the European Commission to launch a market investigation under the qualitative criteria. Still, this process could take years, and any resulting designation might take years of litigation to conclude.

Meanwhile, the AI market is moving at breakneck speed, with new generations of foundation models released every few months.

According to Jonathan Sage, a senior policy advisor at Portland, without a DMA cloud designation there is little the EU can do to prevent hyperscalers from creating dependencies between their cloud infrastructure and the foundation models.

Still, the DMA might be unable to prevent the entrenchment of market power in AI, since it does not directly cover foundation models.

"A more effective solution would be replicating the DMA's systemic approach specifically for foundation models, as it is still unclear what consequences market dominance in this sector will have for downstream operators," Sebastiano Toffaletti, secretary general of the Digital SME Alliance, told Euractiv.

However, putting new rules in place or amending existing ones takes years, which is precisely the time the AI market might not have. Antitrust economist Caffarra stressed that it is a matter of timing.

"The DMA is looking at old problems but does not have the means to pre-empt a tight oligopoly forming at the foundation level in AI. It's just not the right tool. Before anything moves, it will be far too late," she concluded.

[Edited by Zoran Radosavljevic/Alice Taylor]

Original post:

Are EU regulators ready for concentration in the AI market? - EURACTIV


Scientists excited by AI tool that grades severity of rare cancer – BBC.com

1 November 2023

Tina, diagnosed with a sarcoma in June 2022, now has scans every three months

Artificial intelligence is nearly twice as good at grading the aggressiveness of a rare form of cancer from scans as the current method, a study suggests.

By recognising details invisible to the naked eye, AI was 82% accurate, compared with 44% for lab analysis.

Researchers from the Royal Marsden Hospital and Institute of Cancer Research say it could improve treatment and benefit thousands every year.

They are also excited by its potential for spotting other cancers early.

AI is already showing huge promise for diagnosing breast cancers and reducing treatment times.

Computers can be fed huge amounts of information and trained to identify the patterns in it to make predictions, solve problems and even learn from their own mistakes.

"We're incredibly excited by the potential of this state-of-the-art technology," said Professor Christina Messiou, consultant radiologist at The Royal Marsden NHS Foundation Trust and professor in imaging for personalised oncology at The Institute of Cancer Research, London.

"It could lead to patients having better outcomes, through faster diagnosis and more effectively personalised treatment."

Tina's sarcoma was at the back of her abdomen

The researchers, writing in Lancet Oncology, used a technique called radiomics to identify signs, invisible to the naked eye, of retroperitoneal sarcoma - which develops in the connective tissue of the back of the abdomen - in scans of 170 patients.

With this data, the AI algorithm was able to grade the aggressiveness of 89 other European and US hospital patients' tumours, from scans, much more accurately than biopsies, in which a small part of the cancerous tissue is analysed under a microscope.
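In outline, radiomics turns each scan region into a vector of quantitative features and trains a classifier on them. The sketch below is a minimal illustration of that idea with synthetic data, using numpy and scikit-learn; it is not the study's actual pipeline or feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def radiomic_features(volume: np.ndarray) -> np.ndarray:
    """Toy feature vector for a 3-D scan region: intensity and texture stats."""
    gradients = np.gradient(volume.astype(float))
    return np.array([
        volume.mean(), volume.std(),                     # intensity statistics
        np.percentile(volume, 90),                       # bright-voxel level
        np.mean([np.abs(g).mean() for g in gradients]),  # crude texture measure
    ])

# Synthetic stand-in data: 170 labelled tumour volumes, mirroring the
# training-cohort size mentioned above (0 = low grade, 1 = high grade).
rng = np.random.default_rng(0)
grades = rng.integers(0, 2, size=170)
X = np.array([radiomic_features(rng.normal(loc=g, size=(16, 16, 16)))
              for g in grades])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, grades)
print(model.predict(X[:3]))  # grade predictions for the first three volumes
```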

'Quicker diagnosis'

When dental nurse Tina McLaughlan was diagnosed - in June last year, after stomach pain - with a sarcoma at the back of her abdomen, doctors relied on computerised-tomography (CT) scan images to find the problem.

They decided it was too risky to give her a needle biopsy.

The 65-year-old, from Bedfordshire, had the tumour removed and now returns to the Royal Marsden for scans every three months.

She was not part of the AI trial but told BBC News it would help other patients.

"You go in for the first scan and they can't tell you what it is - they didn't tell me through all my treatment, until the histology, post-op, so it would be really useful to know that straight away," Ms McLaughlan said.

"Hopefully, it would lead to a quicker diagnosis."

'Personalised treatment'

About 4,300 people in England are diagnosed with this type of cancer each year.

Prof Messiou hopes the technology can eventually be used around the world, with high-risk patients given specific treatment while those at low risk are spared unnecessary treatments and follow-up scans.

Dr Paul Huang, from the Institute of Cancer Research, London, said: "This kind of technology has the potential to transform the lives of people with sarcoma - enabling personalised treatment plans tailored to the specific biology of their cancer.

"It's great to see such promising findings."

Read more from the original source:

Scientists excited by AI tool that grades severity of rare cancer - BBC.com


Dell GA’s APEX Cloud Platform for Red Hat OpenShift – The New Stack

Dell Technologies recently announced the general availability of its Apex Cloud Platform for Red Hat OpenShift, a turnkey infrastructure platform aimed at simplifying the deployment of microservices containers. It is designed to run in data centers and connect to multiple cloud services.

Apex is the first fully integrated application delivery platform purposely engineered for Red Hat OpenShift, the company said. It was jointly engineered with Red Hat for Dell PowerEdge servers using new, fourth-generation Intel processors. Apex is designed to upgrade how organizations deploy, manage and run containers alone or alongside virtual machines, Dell said.

OpenShift is a hybrid cloud application platform that automates many of the manual processes involved in containerizing, deploying and managing applications. It is built on top of Kubernetes, which has become a de facto cloud operating system for an increasing number of enterprises. The APEX/OpenShift package is bolstered by high-performance infrastructure and advanced DevOps automation tools.
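Because OpenShift is built on Kubernetes, the day-to-day unit of deployment is the same: a declarative manifest the cluster reconciles toward. As a rough illustration of what containerizing and deploying means in practice, the sketch below creates a two-replica Deployment with the official Kubernetes Python client; the image name and namespace are placeholders, and this is generic Kubernetes usage rather than anything Apex-specific.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a valid kubeconfig for the cluster

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # the cluster keeps two pods running at all times
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="hello",
                    image="registry.example.com/hello:1.0",  # placeholder image
                    ports=[client.V1ContainerPort(container_port=8080)],
                ),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="demo", body=deployment)
```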

Dell provides several options for the platform's deployment. They are: ground-to-cloud, which brings Apex to the public cloud; cloud-to-ground, enabling the deployment of modern cloud software stacks fully integrated on Dell data center infrastructure; and as-a-service, which brings the subscription cloud to dedicated on-premises environments. Apex Cloud Platform falls under the cloud-to-ground category.

"Dell APEX Cloud Platform for Red Hat OpenShift is easily managed through the OpenShift console, so there's no additional steps needed or a need to leave your OpenShift environment to manage your infrastructure," Caitlin Gordon, Dell VP of DevOps product management, told The New Stack. "It also includes deep DevOps automation tools and lifecycle management tools that allow developers to move faster, as they'll be focused less on infrastructure management and more on getting critical applications out the door."

While the Dell message indicates that Apex is custom-designed for OpenShift, the same platform also has been customized previously for Microsoft Azure.

Gordon said that the Apex-OpenShift package helps optimize the placement of workloads in physical data centers or in single or multiple cloud deployments.

"This features the same software-defined storage as Dell public cloud storage offerings, so devs and IT teams can easily move their applications between on-premises and public cloud environments," Gordon said.

Key features

The availability of Dell Apex Cloud Platform for Red Hat OpenShift comes at a time when enterprises are planning to modernize their IT environments so that they can make application development and workload placements more efficient.

This move toward containers has been a trend for several years. IT researcher Gartner estimates that more than 95% of global enterprises will be running containerized applications in production, alone or alongside virtual machines, by 2028. This is from "A CTO's Guide to Navigating the Cloud-Native Container Ecosystem," published May 31, 2023.

Dell Validated Design for Red Hat OpenShift AI on APEX Cloud Platform is expected to be available as of Oct. 31, the company said.

Continued here:
Dell GA's APEX Cloud Platform for Red Hat OpenShift - The New Stack


Apple’s Sales Drop Slightly While Profit Is Up 11 Percent – The New York Times

At a time when the tech industry's biggest companies are rebounding from a post-pandemic dip, Apple is suffering through its most prolonged sales slump in more than a decade.

On Thursday, the world's most valuable tech company said that sales fell 1 percent, to $89.5 billion, from last year for the three months that ended in September, bringing an end to a fiscal year in which it posted sales declines every quarter. The company reported that profits rose 11 percent, to $22.96 billion.

Apple's most important business, the iPhone, rallied last month behind the release of four new devices, which boosted sales 3 percent, to $43.81 billion, from last year. And the company's sales for software and services, such as Apple Music and cloud storage, jumped 16 percent, to $22.31 billion.

But sales sank for most of the company's other businesses, including the Mac, iPad, and the Apple Watch and AirPods. Total product sales dropped by 5 percent, to $67.18 billion.

The results exceeded Wall Street's expectations for $89.34 billion in sales and $21.77 billion in profit. The company said that it expected sales in the current quarter to be similar to the same period a year ago, disappointing Wall Street, which had projected that sales would increase from a year ago.

Apple's shares have declined 11 percent from their peak this summer and were down more than 3 percent in after-hours trading on Thursday.

Tim Cook, Apple's chief executive, faces a bevy of challenges in the year ahead. After a surge in demand for new 5G iPhones, wireless carriers are reporting a slowdown in the number of people buying new smartphones in the United States, Apple's largest market, according to Arete Research, an investment research firm.

The company said that supply constraints for the iPhone 15 in September blunted sales during the month, but that supplies were improving and would be able to fulfill demand in the current quarter.

In China, Apple is confronting renewed competition in the luxury smartphone business from Huawei. The Chinese smartphone maker had been hampered in recent years by U.S. restrictions on its access to 5G technology and Android software, but in August it revealed a jade-green smartphone, the Mate 60 Pro, that has the same capabilities as many iPhones. Its release was followed by the Chinese government directing employees of some government agencies to stop using iPhones for work.

Sales of Apple's new flagship iPhones declined 4.5 percent during the final weeks of September from last year, according to Counterpoint Research, which analyzes the smartphone market. The dip was an outgrowth of the broader downturn in consumer spending in China, the firm said.

Last month, China expanded its challenge to Apple's business by launching a regulatory review of the company's biggest iPhone manufacturer, Foxconn of Taiwan. The manufacturer is facing a tax audit and being investigated over its compliance with land use regulations. The scrutiny comes as Terry Gou, Foxconn's founder, runs for Taiwan's presidency in a campaign that could boost the ruling party, which opposes closer ties with Beijing.

Mr. Cook traveled to China last month in an unannounced visit that included stops at an Apple store, a visit to the factory of Luxshare Precision, a Chinese iPhone manufacturer, and a meeting with Wang Wentao, the country's commerce minister.

"I could not be more excited about the interactions I had with customers and employees," Mr. Cook said during a call with analysts on Thursday. He said that the company's business in China remained strong, adding that the iPhone business set a record in mainland China during the three months that ended in September.

The broader tech industry has been lifted by enthusiasm for generative artificial intelligence. Last month, Microsoft reported that investments in A.I. were beginning to help sales of its cloud-computing business. Google's parent company, Alphabet, which has invested heavily in A.I., disappointed investors, who had hoped for a greater lift to sales. Amazon and Meta Platforms, Facebook's parent company, also emphasized their investments in that area.

But Apple, which is known for its secrecy, has been quiet about its plans for generative A.I. On Thursday, Mr. Cook said that the company was investing in generative A.I. but was unlikely to provide details until it had a product to bring to market. "We're going to do it responsibly, and you're going to see product advancements over time where those technologies are at the heart," Mr. Cook said.

As it looks ahead to next year, much of the company's focus will shift to the release of its first major new product since 2014: high-tech goggles that blend the real world with virtual reality. The $3,500 device, the Vision Pro, has the potential to provide a new revenue stream at a time when sales of its other products have slowed. Analysts project Apple will sell fewer than half a million units.

The company also is focused on reviving sales of its iPads and Macs. On Monday, Apple revealed new MacBook Pros and iMacs with speedier processors and encouraged customers with older Macs to upgrade. Sales of Macs declined 27 percent, to $29.36 billion, over the past fiscal year.

Read this article:
Apple's Sales Drop Slightly While Profit Is Up 11 Percent - The New York Times
