Generative AI: the Shortcut to Digital Modernisation – CIO

THE BOOM OF GENERATIVE AI

Digital transformation is the bleeding edge of business resilience. For years, it was underpinned by the adoption of cloud and the modernisation of the IT platform. As transformation is an ongoing process, enterprises look to innovations and cutting-edge technologies to fuel further growth and open more opportunities. Notably, organisations are now turning to Generative AI to navigate the rapidly evolving tech landscape.

Although it emerged only recently, the potential applications of GenAI for businesses are significant and wide-ranging. Businesses are rapidly implementing AI-driven tools into their daily workflows to save valuable time. A recent McKinsey study estimated that automation integrated with Generative AI could accelerate 29.5 percent of the working hours in the US economy. Generative AI can help businesses achieve faster development in two main areas: low/no-code application development and mainframe modernisation.

As Generative AI and low-code technology increasingly converge, businesses can unlock numerous opportunities by using them in tandem.

Generative AI also plays a role in assisting organisations with the transformation and modernisation of their mainframes, which continue to be in wide use in key sectors such as retail, banking, and aviation.

Research from IBM found that 93 percent of companies still use mainframes for financial management, 73 percent for customer transaction systems, and more than 70 percent of Fortune 500 companies run business-critical applications on mainframes.

However, mainframes are a challenging prospect for transformation because the applications they run are highly complex and difficult to change. Over time, these applications become outdated, their associated costs rise, and maintaining and updating the system can cause operational disruption.

Organisations are shifting workloads to hybrid cloud environments while modernising mainframe systems to serve the most critical applications. However, this migration process may involve data-transfer vulnerabilities, potential mishandling of sensitive information, and the challenge of outdated programming languages. A poorly structured approach to application modernisation can also lead to data breaches.

Hence, organisations are turning to Generative AI to mitigate these risks, bolstering reliability and efficiency in the areas where human error might create vulnerabilities.

By leveraging AI, engineers can quickly generate the code they need for an application migration exercise, ensure its quality, and create the necessary documentation. Even after migration, AI can help generate test cases, maintain and add more features to existing legacy systems, as well as evaluate the similarity between mainframe functions and migrated functions.

Given the scarcity of experts in legacy languages like COBOL, on which many mainframe applications are built, Generative AI also provides a bridge that allows a broader range of engineers and coding experts to tackle modernisation and migration projects. It equips developers with the necessary knowledge, improving developer efficiency, rapidly resolving issues, and easing the maintenance and modernisation of enterprise systems across various industries.

For instance, FPT Software has recently introduced the development of Masterful AI Assistant, or Maia, a Generative AI agent concept designed specifically to assist with highly complex processes. Its vision is to be the co-pilot and co-worker for developers and engineers, boosting productivity and making the development process more enjoyable and fulfilling.

Through its conversational interface, Maia will deliver guidance and domain know-how along with automating code documentation and co-programming. Maia is also expected to analyse the complexities of legacy systems to ensure accuracy, generate missing documents and suggest suitable modern architecture during the assessment phase, and generate test cases during the testing phase.

While the benefits of embracing AI are significant, maximising those opportunities requires extensive expertise. There are three key considerations for companies when strategically collaborating with an AI partner.

To this end, the IT service provider FPT Software is currently adopting an ecosystem and partnership approach, covering various areas from research and solutions development to responsible AI, to propel innovation and the practical application of AI.

In particular, FPT Software, in collaboration with Mila, a Canadian research institute specialising in machine learning, has formed an AI Residency program in which resident researchers work directly with leading academics while participating in real-world projects, helping organisations build a suite of products backed by a strong R&D base.

Both organisations have successfully promoted Responsible AI to support sustainable growth, human development, and social progress. This agenda is further strengthened on a global scale with FPT Software joining the recently established AI Alliance, a pivotal initiative formed by leading organisations like IBM and Meta.

The IT firm also works with visionary partners to develop impactful solutions. Highlights include its collaboration with Landing AI to develop a computer-vision quality-inspection solution with visual prompting, shortening labelling time from months to minutes, and its partnership with Silicon Valley's Aitomatic to expand the provision of advanced industrial AI solutions, integrating Open Source Small Specialist Agent (OpenSSA) technology.

Generative AI helps companies accelerate their digital transformation and empowers their entire workforce to engage with technology while reducing the risk of human error.

To successfully harness the power of AI, a partner-led approach is critical for navigating potential AI challenges. With the right partner, the results of this next wave of transformation will be remarkable.

Explore how FPT Software's AI solutions can accelerate your digital transformation.

FBI fears China is stealing AI technology to ramp up spying and steal personal information to build terrifying – Daily Mail

China is feared to be stealing artificial intelligence technology to carry out massive cyberattacks on the US and elsewhere.

The FBI is increasingly concerned about the dictatorship's frequent high-profile data thefts from American corporations and government agencies.

Sophisticated AI would allow China to boost the scale and effectiveness of what it could collect and, crucially, analyze, sources told the Wall Street Journal.

The FBI is so worried about this escalation that it and other Western intelligence agencies met with industry leaders in October to discuss the threat.

The US and China are locked in an arms race over the rapidly developing technology that has the capacity to reshape their rivalry and how wars are fought.

China's quest for dominance includes corporate espionage efforts to steal AI technology from the firms developing it.

Former Apple worker Xiaolang Zhang was arrested in July 2018 as he tried to board a flight to Beijing with stolen self-driving vehicle trade secrets.

He pleaded guilty to stealing trade secrets and will be sentenced in February.

Then last year, Applied Materials sued Chinese-owned rival Mattson Technology, claiming a defecting engineer stole trade secrets.

Rather than AI algorithms, the company makes computer chips powerful enough to run high-end AI programs.

Federal prosecutors got involved, but no charges were filed, and Mattson said there was no evidence it ever used anything allegedly stolen from Applied in its products.

In recent years the FBI has been more interested in thefts from firms like Applied, since even if China got its hands on the latest AI programs, they would be obsolete within months.

China was linked to huge data breaches at Marriott, where millions of guest records were stolen, health insurer Elevance Health, and credit agency Equifax.

The Office of Personnel Management also had 20 million personnel files of government workers and their families stolen in 2015.

Then in 2021, tens of thousands of servers running Microsoft Exchange Server, which underpins Outlook, were hit - and experts fear previously stolen personal data was used to target the attack.

Earlier this month analysts revealed Beijing's military burrowed into more than 20 major suppliers in the last year alone, including a water utility in Hawaii, a major West Coast port, and at least one oil and gas pipeline.

They bypassed elaborate cybersecurity systems by intercepting passwords and log-ins left unguarded by junior employees, leaving China 'sitting on a stockpile of strategic' vulnerabilities.

Hackers were in August spotted trying to penetrate systems run by the Public Utility Commission of Texas and the Electric Reliability Council of Texas which provide the state's power.

Codenamed Volt Typhoon, the project has coincided with growing tension over Taiwan and could unplug US efforts to protect its interests in the South China Sea.

Communications, manufacturing, utility, transportation, construction, maritime, government, information technology, and education organizations were targeted by Volt Typhoon.

The Director of National Intelligence warned in February that China is already 'almost certainly capable' of launching cyberattacks to disable oil and gas pipelines and rail systems.

'If Beijing feared that a major conflict with the United States were imminent, it almost certainly would consider undertaking aggressive cyber operations against U.S. homeland critical infrastructure and military assets worldwide,' the annual assessment reported.

China was so good at hacking into US companies and government databases that it likely collected more data than it could process and make useful.

But AI technology, combined with its army of hackers, would allow it to comb through billions of records and extract useful information with ease.

Intelligence operatives could use data gleaned from multiple sources to build dossiers on millions of specific people.

This could include fingerprints, financial and health records, passport information, and personal contacts.

China could use them to identify and track spies and monitor the travel of government officials, and figure out who has a security clearance worth targeting.

'China can harness AI to build a dossier on virtually every American, with details ranging from their health records to credit cards and from passport numbers to the names and addresses of their parents and children,' Glenn Gerstell, a former general counsel at the National Security Agency, told the Wall Street Journal.

'Take those dossiers and add a few hundred thousand hackers working for the Chinese government, and we've got a scary potential national security threat.'

Such escalating threats from China meant developing AI technology to counter them was increasingly important.

Industry experts believed AI would be better on defense than offense, and be able to identify and counter attacks from China and elsewhere.

The Big Questions About AI in 2024 – The Atlantic

Let us be thankful for the AI industry. Its leaders may be nudging humans closer to extinction, but this year, they provided us with a gloriously messy spectacle of progress. When I say year, I mean the long year that began late last November, when OpenAI released ChatGPT and, in doing so, launched generative AI into the cultural mainstream. In the months that followed, politicians, teachers, Hollywood screenwriters, and just about everyone else tried to understand what this means for their future. Cash fire-hosed into AI companies, and their executives, now glowed up into international celebrities, fell into Succession-style infighting. The year to come could be just as tumultuous, as the technology continues to evolve and its implications become clearer. Here are five of the most important questions about AI that might be answered in 2024.

Is the corporate drama over?

OpenAI's Greg Brockman is the president of the world's most celebrated AI company and the golden-retriever boyfriend of tech executives. Since last month, when Sam Altman was fired from his position as CEO and then reinstated shortly thereafter, Brockman has appeared to play a dual role for the company: part cheerleader, part glue guy. As of this writing, he has posted no fewer than five group selfies from the OpenAI office to show how happy and nonmutinous the staffers are. (I leave it to you to judge whether and to what degree these smiles are forced.) He described this year's holiday party as the company's best ever. He keeps saying how focused, how energized, how united everyone is. Reading his posts is like going to dinner with a couple after an infidelity has been revealed: No, seriously, we're closer than ever. Maybe it's true. The rank and file at OpenAI are an ambitious and mission-oriented lot. They were almost unanimous in calling for Altman's return (although some have since reportedly said that they felt pressured to do so). And they may have trauma-bonded during the whole ordeal. But will it last? And what does all of this drama mean for the company's approach to safety in the year ahead?

An independent review of the circumstances of Altman's ouster is ongoing, and some relationships within the company are clearly strained. Brockman has posted a picture of himself with Ilya Sutskever, OpenAI's safety-obsessed chief scientist, adorned with a heart emoji, but Altman's feelings toward the latter have been harder to read. In his post-return statement, Altman noted that the company was discussing how Sutskever, who had played a central role in Altman's ouster, can continue his work at OpenAI. (The implication: Maybe he can't.) If Sutskever is forced out of the company or otherwise stripped of his authority, that may change how OpenAI weighs danger against speed of progress.

Is OpenAI sitting on another breakthrough?

During a panel discussion just days before Altman lost his job as CEO, he told a tantalizing story about the current state of the company's AI research. A couple of weeks earlier, he had been in the room when members of his technical staff had pushed the frontier of discovery forward, he said. Altman declined to offer more details, unless you count additional metaphors, but he did mention that only four times since the company's founding had he witnessed an advance of such magnitude.

During the feverish weekend of speculation that followed Altman's firing, it was natural to wonder whether this discovery had spooked OpenAI's safety-minded board members. We do know that in the weeks preceding Altman's firing, company researchers raised concerns about a new Q* algorithm. Had the AI spontaneously figured out quantum gravity? Not exactly. According to reports, it had only solved simple mathematical problems, but it may have accomplished this by reasoning from first principles. OpenAI hasn't yet released any official information about this discovery, if it is even right to think of it as a discovery. "As you can imagine, I can't really talk about that," Altman told me recently when I asked him about Q*. Perhaps the company will have more to say, or show, in the new year.

Does Google have an ace in the hole?

When OpenAI released its large-language-model chatbot in November 2022, Google was caught flat-footed. The company had invented the transformer architecture that makes LLMs possible, but its engineers had clearly fallen behind. Bard, Google's answer to ChatGPT, was second-rate.

Many expected OpenAI's leapfrog to be temporary. Google has a war chest that is surpassed only by Apple's and Microsoft's, world-class computing infrastructure, and storehouses of potential training data. It also has DeepMind, a London-based AI lab that the company acquired in 2014. The lab developed the AIs that bested world champions at chess and Go and intuited protein-folding secrets that nature had previously concealed from scientists. Its researchers recently claimed that another AI they developed is suggesting novel solutions to long-standing problems of mathematical theory. Google had at first allowed DeepMind to operate relatively independently, but earlier this year, it merged the lab with Google Brain, its homegrown AI group. People expected big things.

Then months and months went by without Google so much as announcing a release date for its next-generation LLM, Gemini. The delays could be taken as a sign that the company's culture of innovation has stagnated. Or maybe Google's slowness is a sign of its ambition? The latter possibility seems less likely now that Gemini has finally been released and does not appear to be revolutionary. Barring a surprise breakthrough in 2024, doubts about the company, and the LLM paradigm, will continue.

Are large language models already topping out?

Some of the novelty has worn off LLM-powered software in the mold of ChatGPT. That's partly because of our own psychology. "We adapt quite quickly," OpenAI's Sutskever once told me. He asked me to think about how rapidly the field has changed. "If you go back four or five or six years, the things we are doing right now are utterly unimaginable," he said. Maybe he's right. A decade ago, many of us dreaded our every interaction with Siri, with its halting, interruptive style. Now we have bots that converse fluidly about almost any subject, and we struggle to remain impressed.

AI researchers have told us that these tools will only get smarter; they've evangelized about the raw power of scale. They've said that as we pump more data into LLMs, fresh wonders will emerge from them, unbidden. We were told to prepare to worship a new sand god, so named because its cognition would run on silicon, which is made of melted-down sand.

ChatGPT has certainly improved since it was first released. It can talk now, and analyze images. Its answers are sharper, and its user interface feels more organic. But it's not improving at a rate that suggests that it will morph into a deity. Altman has said that OpenAI has begun developing its GPT-5 model. That may not come out in 2024, but if it does, we should have a better sense of how much more intelligent language models can become.

How will AI affect the 2024 election?

Our political culture hasn't yet fully sorted AI issues into neatly polarized categories. A majority of adults profess to worry about AI's impact on their daily life, but those worries aren't coded red or blue. That's not to say the generative-AI moment has been entirely innocent of American politics. Earlier this year, executives from companies that make chatbots and image generators testified before Congress and participated in tedious White House roundtables. Many AI products are also now subject to an expansive executive order.

But we haven't had a big national election since these technologies went mainstream, much less one involving Donald Trump. Many blamed the spread of lies through social media for enabling Trump's victory in 2016, and for helping him gin up a conspiratorial insurrection following his 2020 defeat. But the tools of misinformation that were used in those elections were crude compared with those that will be available next year.

A shady campaign operative could, for instance, quickly and easily conjure a convincing picture of a rival candidate sharing a laugh with Jeffrey Epstein. If that doesn't do the trick, they could whip up images of poll workers stuffing ballot boxes on Election Night, perhaps from an angle that obscures their glitchy, six-fingered hands. There are reasons to believe that these technologies won't have a material effect on the election. Earlier this year, my colleague Charlie Warzel argued that people may be fooled by low-stakes AI images (the pope in a puffer coat, for example) but they tend to be more skeptical of highly sensitive political images. Let's hope he's right.

Soundfakes, too, could be in the mix. A politician's voice can now be cloned by AI and used to generate offensive clips. President Joe Biden and former President Trump have been public figures for so long, and voters' perceptions of them are so fixed, that they may be resistant to such an attack. But a lesser-known candidate could be vulnerable to a fake audio recording. Imagine if during Barack Obama's first run for the presidency, cloned audio of him criticizing white people in colorful language had emerged just days before the vote. Until bad actors experiment with these image and audio generators in the heat of a hotly contested election, we won't know exactly how they'll be misused, and whether their misuses will be effective. A year from now, we'll have our answer.

Apple engages in talks with leading News outlets for AI advancements: Report – Mint

In the past few weeks, Apple has initiated talks with prominent news and publishing entities, aiming to secure approval for utilizing their content in the company's advancement of generative artificial intelligence systems, as reported by the New York Times on Friday.

The California-based tech giant has proposed multiyear agreements, valued at a minimum of $50 million, to obtain licenses for the archives of news articles, as indicated by sources familiar with the negotiations, as reported in the article.

Apple has reached out to news entities such as Condé Nast, the publisher of Vogue and the New Yorker, along with NBC News and IAC, the owner of People, the Daily Beast, and Better Homes and Gardens, as reported by the New York Times.

According to the report, certain publishers approached by Apple showed a tepid response to the outreach. Meanwhile, Apple has also reportedly developed an internal service akin to ChatGPT, intended to assist employees in testing new features, summarizing text, and answering questions based on accumulated knowledge.

In July, Mark Gurman suggested that Apple was in the process of creating its own AI model, with the central focus on a new framework named Ajax. The framework has the potential to offer various capabilities, with a ChatGPT-like application, unofficially dubbed "Apple GPT," being just one of the many possibilities. Recent indications from an Apple research paper suggest that Large Language Models (LLMs) may run on Apple devices, including iPhones and iPads.

This research paper, initially discovered by VentureBeat, is titled "LLM in a flash: Efficient Large Language Model Inference with Limited Memory." It addresses a critical issue related to on-device deployment of Large Language Models (LLMs), particularly on devices with constrained DRAM capacity.

Keivan Alizadeh, a Machine Learning Engineer at Apple and the primary author of the paper, explained, "Our approach entails developing an inference cost model that aligns with the characteristics of flash memory, directing us to enhance optimization in two crucial aspects: minimizing the amount of data transferred from flash and reading data in larger, more cohesive segments."
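The paper's actual technique involves a cost model and flash-aware data layout; as a loose, hypothetical illustration of the "reading data in larger, more cohesive segments" idea, the sketch below coalesces requests for adjacent weight rows into single contiguous reads. All names here (FlashSim included) are invented for the example and are not from Apple's paper.

```python
class FlashSim:
    """Toy stand-in for weights stored on flash: counts read operations."""

    def __init__(self, rows):
        self.rows = rows
        self.reads = 0

    def read(self, start, stop):
        self.reads += 1  # one I/O request per contiguous range
        return self.rows[start:stop]


def load_naive(flash, row_ids):
    # One read per needed row: many small, scattered I/O requests.
    return [flash.read(r, r + 1)[0] for r in row_ids]


def load_coalesced(flash, row_ids):
    # Merge runs of adjacent rows into single contiguous reads.
    rows = sorted(set(row_ids))
    fetched = {}
    i = 0
    while i < len(rows):
        j = i
        while j + 1 < len(rows) and rows[j + 1] == rows[j] + 1:
            j += 1
        block = flash.read(rows[i], rows[j] + 1)  # one larger read
        for offset, r in enumerate(range(rows[i], rows[j] + 1)):
            fetched[r] = block[offset]
        i = j + 1
    return [fetched[r] for r in row_ids]
```

On real flash storage, a few large sequential reads lose far less time to per-request latency than many scattered small ones, which is the intuition the read counter above is meant to capture.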

(With inputs from Reuters)

Published: 25 Dec 2023, 11:59 AM IST

Google Will Kill ChatGPT and Other Overhyped AI Predictions We Heard In 2023 – Medium

Here are some predictions that I doubt will happen in 2024 or the near future, and why I think so.

2023 was the year of AI. Every month, we've seen new AI tools being launched, advancements in the field, upgrades, and more things that kept the field of AI moving.

Overhyped AI predictions weren't missing in 2023 either. Throughout the year we heard things like "AGI was (or will soon be) achieved" or "AI will take everyone's job."

Here's why I think they're overhyped and why I doubt they'll happen in the coming years.

Almost every month there's a new "ChatGPT killer", at least that's what we see in the media. The latest ChatGPT killer (by consensus) was Gemini Ultra, a tool that beat GPT-4 in the benchmarks but isn't available to the public yet.

Even if Gemini Ultra is slightly superior to GPT-4, tech superiority doesn't always translate to market dominance, and Google knows that (probably that's why they created so much hype with their demo).

I checked some articles and videos that claim Google will kill ChatGPT to find out how they came to such conclusions. Here are some of the arguments I found.

I don't think any of these arguments is enough to claim that Google will indeed kill ChatGPT.

Why? Well, #2 is not a good metric for deciding whether a product will kill its competitor. Recently, Google shares sank following reports that some of their AI Gemini Ultra demo was faked. This doesn't mean Gemini Ultra is a bad model or that it can't compete with GPT-4, but it shows the consequences of Google overhyping its own product.

On the other hand, even if #1 is true, it's not enough. Google might have the resources to create a tool to compete

Building AI safely is getting harder and harder – The Atlantic

This is Atlantic Intelligence, an eight-week series in which The Atlantic's leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. Sign up here.

The bedrock of the AI revolution is the internet, or more specifically, the ever-expanding bounty of data that the web makes available to train algorithms. ChatGPT, Midjourney, and other generative-AI models learn by detecting patterns in massive amounts of text, images, and videos scraped from the internet. The process entails hoovering up huge quantities of books, art, memes, and, inevitably, the troves of racist, sexist, and illicit material distributed across the web.

Earlier this week, Stanford researchers found a particularly alarming example of that toxicity: The largest publicly available image data set used to train AIs, LAION-5B, reportedly contains more than 1,000 images depicting the sexual abuse of children, out of more than 5 billion in total. A spokesperson for the data set's creator, the nonprofit Large-scale Artificial Intelligence Open Network, told me in a written statement that it has a zero tolerance policy for illegal content and has temporarily halted the distribution of LAION-5B while it evaluates the report's findings, although this and earlier versions of the data set have already trained prominent AI models.

Because they are free to download, the LAION data sets have been a key resource for start-ups and academics developing AI. It's notable that researchers have the ability to peer into these data sets to find such awful material at all: There's no way to know what content is harbored in similar but proprietary data sets from OpenAI, Google, Meta, and other tech companies. One of those researchers is Abeba Birhane, who has been scrutinizing the LAION data sets since the first version's release, in 2021. Within six weeks, Birhane, a senior fellow at Mozilla who was then studying at University College Dublin, published a paper detailing her findings of sexist, pornographic, and explicit rape imagery in the data. "I'm really not surprised that they found child-sexual-abuse material in the newest data set," Birhane, who studies algorithmic justice, told me yesterday.

Birhane and I discussed where the problematic content in giant data sets comes from, the dangers it presents, and why the work of detecting this material grows more challenging by the day. Read our conversation, edited for length and clarity, below.

Matteo Wong, assistant editor

More Challenging By the Day

Matteo Wong: In 2021, you studied the LAION data set, which contained 400 million captioned images, and found evidence of sexual violence and other harmful material. What motivated that work?

Abeba Birhane: Because data sets are getting bigger and bigger, 400 million image-and-text pairs is no longer large. But two years ago, it was advertised as the biggest open-source multimodal data set. When I saw it being announced, I was very curious, and I took a peek. The more I looked into the data set, the more I saw really disturbing stuff.

We found there was a lot of misogyny. For example, take any benign word that is remotely related to womanhood, like "mama," "auntie," or "beautiful": when you queried the data set with those types of terms, it returned a huge proportion of pornography. We also found images of rape, which was really emotionally heavy and intense work, because we were looking at images that are really disturbing. Alongside that audit, we also put forward a lot of questions about what the data-curation community and larger machine-learning community should do about it. We also later found that, as the size of the LAION data sets increased, so did hateful content. By implication, so does any problematic content.

Wong: This week, the biggest LAION data set was removed because of the finding that it contains child-sexual-abuse material. In the context of your earlier research, how do you view this finding?

Birhane: It did not surprise us. These are the issues that we have been highlighting since the first release of the data set. We need a lot more work on data-set auditing, so when I saw the Stanford report, it's a welcome addition to a body of work that has been investigating these issues.

Wong: Research by yourself and others has continuously found some really abhorrent and often illegal material in these data sets. This may seem obvious, but why is that dangerous?

Birhane: Data sets are the backbone of any machine-learning system. AI didn't come into vogue over the past 20 years only because of new theories or new methods. AI became ubiquitous mainly because of the internet, because that allowed for mass harvesting of large-scale data sets. If your data contains illegal stuff or problematic representation, then your model will necessarily inherit those issues, and your model output will reflect these problematic representations.

But if we take another step back, to some extent it's also disappointing to see data sets like the LAION data set being removed. The LAION data set came into existence because the creators wanted to replicate the data sets inside big corporations, for example, what the data sets used by OpenAI might look like.

Wong: Does this research suggest that tech companies, if they're using similar methods to collect their data sets, might harbor similar problems?

Birhane: It's very, very likely, given the findings of previous research. Scale comes at the cost of quality.

Wong: You've written about research you couldn't do on these giant data sets because of the resources necessary. Does scale also come at the cost of auditability? That is, does it become less possible to understand what's inside these data sets as they become larger?

Birhane: There is a huge asymmetry in terms of resource allocation, where it's much easier to build stuff but a lot more taxing in terms of intellectual labor, emotional labor, and computational resources when it comes to cleaning up what's already been assembled. If you look at the history of data-set creation and curation, say 15 to 20 years ago, the data sets were much smaller scale, but there was a lot of human attention that went into detoxifying them. But now, all that human attention to data sets has really disappeared, because these days a lot of that data sourcing has been automated. That makes it cost-effective if you want to build a data set, but the reverse side is that, because data sets are much larger now, they require a lot of resources, including computational resources, and it's much more difficult to detoxify them and investigate them.

Wong: Data sets are getting bigger and harder to audit, but more and more people are using AI built on that data. What kind of support would you want to see for your work going forward?

Birhane: I would like to see a push for open-sourcing data sets, not just model architectures, but data itself. As horrible as open-source data sets are, if we don't know how horrible they are, we can't make them better.

P.S.

Struggling to find your travel-information and gift-receipt emails during the holidays? You're not alone. Designing an algorithm to search your inbox is paradoxically much harder than making one to search the entire internet. My colleague Caroline Mimbs Nyce explored why in a recent article.

Matteo

Original post:

Building AI safely is getting harder and harder - The Atlantic

Remote work, AI, and skills-based hiring threaten to put our jobs on the chopping block, but experts say those fears are overblown – Fortune

In 1897, literary icon Mark Twain is said to have come across his own obituary in a New York newspaper. Asked for his response, tongue partly in cheek, Twain famously said the reports of his death have been greatly exaggerated.

The same sentiment might be said about the current and future state of American white collar work. An influx of headlines proposes that millions of jobs are set to disappear within a decade or two. Depending on who you ask or which article you click on, you may well find your job on a list.

There's no getting around the fact that jobs ten years from now will call on an entirely new set of skills, and maybe an entirely new set of workers. At least, that's the case if you ask Harvard Business School management professor and future-of-work expert Joseph Fuller whether reports of the death of well-paid, long-standing jobs are in fact exaggerated.

"The future of white collar work is going to be different, but jobs won't disappear en masse," Fuller, who co-leads Harvard's Managing the Future of Work initiative, tells Fortune. "Some skills will always be crucial, so it's important to remain agile and continually look for ways of upskilling and not fearing the future."

This prospect isn't quite as daunting as it sounds. The shape of work has morphed and reoriented countless times in the past. On a macro level, consider the Industrial Revolution. For a flash in the pan, consider Y2K mania. But, even in moments of grave uncertainty, people tend to chug along. Humans have always adapted, refining their skills and retrofitting their careers to the current needs of the workforce. And despite a perennial fear of a technocratic future, robots haven't nearly caught up to us yet.

The U.S., and indeed the industrialized world, is trending towards a future in which our jobs as they exist today will gradually become unrecognizable. It's the question of just how quickly and widely those changes will take hold that's spurred endless debate. In today's post-COVID landscape, the overarching fear of job disappearance stems from three discrete rising threats: remote work, Generative AI like ChatGPT, and skills-based hiring. But experts say none are quite as threatening as they seem.

Most everyone likes flexible work. But those who have been living it up in their remote-first or fully remote desk jobs since 2020 may be in for a rude awakening. If you've proven you can work from anywhere, your boss could also deduce that it can be done by someone else, somewhere else, for much cheaper. Some experts believe that could lead to a mass exodus of remote jobs in the U.S., potentially within a decade.

"If people that code for Google and Facebook were able to live wherever in the U.S. they wanted and [work] for a year and a half without ever going to the office, it seems very, very likely that a lot of companies will be rethinking this longer-term and outsourcing those kinds of jobs that didn't used to be outsourced," Anna Stansbury, an assistant professor of work and organization studies at MIT Sloan School of Management who teaches a course on the future of work, told Fortune last year. Needless to say, the American workforce would seismically change if well-paid white collar jobs suddenly moved overseas.

According to additional data from the National Bureau of Economic Research (NBER), more jobs than you might think are in fact highly offshorable. Bosses, paying big-city salaries to workers who long ago relocated to smaller, cheaper towns, are already asking themselves whether someone needs to be physically close to an office or their actual team. Within a few years, work that can be reasonably done remotely by people in such jobs "will inevitably be done by telemigrants," NBER said.

But maybe not so fast. "Social and cultural contexts across countries [make] it less likely that a public relations specialist or a sales engineer located in Hanoi is a perfect substitute for one located in Seattle," the researchers added.

And an analysis by The Washington Post finds little evidence that this will happen any time soon, at the very least. Even if it does, American white-collar workers are in the best possible position to survive the worst of it. As the Post puts it, they're "the most mobile and most marketable employees in the workforce."

Nothing strikes fear in the hearts of tony Ivy League graduates like the thought that networking connections and ritzy diplomas will soon hold little weight in hiring managers' eyes. More and more executives have opened their arms to degreeless workers with demonstrable skills, or an appetite to learn those skills. The craze has been dubbed "skills-based hiring," or "skills-first" if you ask former IBM CEO Ginni Rometty, who's been championing the cause for a decade.

The percentage of IBM job listings requiring a four-year degree dropped from 95% in 2011 to under 50% in January 2021, to no discernible effect on productivity. Rometty later told Fortune CEO Alan Murray that hires without college degrees performed just as well as those with PhDs from leading universities.

The growing shift towards skills-based hiring will widen the talent pool, which in turn means bosses can hire someone with an untraditional or less credentialed background to do the same job for less. In simple terms, that may mean reliable entry-level jobs for college grads could disappear, so to speak. Or that your job, regardless of level, will be given to a degreeless someone else. But what it really means is recruiting will become more democratized, an easy net positive for the entire workforce. And that you might need to sharpen your skills.

Fuller finds skills-based hiring "very valuable." Drawing on his own research on the topic, he said when companies removed a college degree requirement from a job listing, they often then infused new language in the job description, asking for greater social skills, ability to manage, ability to reason, ability to deal with strangers, and executive functioning. (Commonly referred to as soft skills.)

"Do I think white collar work will inevitably require a college degree? Absolutely not," he says. "It will require certain types of technical or hard skills not necessarily indicated by college."

That may also be the case for AI, which he deems the biggest threat of them all, although still overblown.

It's hard to ignore the impact of artificial intelligence like ChatGPT, even in its nascent stage. This year alone, 4,000 tech industry jobs have been rendered unnecessary due to the manifold recent technological advancements, per a report from recruiting firm Challenger, Gray & Christmas.

"We do believe AI will cause more job loss, though we are surprised how quickly the technology was cited as a reason," senior vice president Andy Challenger told Fortune. "It is incredible the speed the technology is evolving and being adapted." Some CEOs have said AI is "moving faster than real life," leaving scant hope for tech-averse workers to keep up.

That's left millions of U.S. workers terrified that they'll lose their jobs: nearly one-quarter of them worry that rapidly advancing technology will soon render them obsolete, per a recent Gallup survey. Another study conducted by The Harris Poll in partnership with Fortune found that 40% of workers familiar with ChatGPT worry it will replace them.

But those most at risk of getting displaced aren't the tech workers Challenger's research focused on; AI is creating new jobs for tech workers just as quickly as old jobs are going extinct. (That doesn't mean new AI coworkers won't lead to, if not an extinction, a pay cut.) The real at-risk workers are those in rote, repetitive jobs.

"I wouldn't want to be someone who does the reading or summarization of business books to send out 20-page summaries, because AI is really good at summarization already," Fuller says. A significant chunk of what people do today will go away, he predicted, but nonetheless a material amount of work will remain.

That work "will be a lot less dull, a lot less routine, and [have] a lot less filling out of expense reports or quarterly forecast updates," he adds, reasoning that AI will subsume the tiresome duties, leaving humans with the more interesting tasks. While we'll still need basic AI know-how, a LinkedIn report finds that robots are less likely to snap up meaningful work and more likely to simply change workflows and outsource repetitive tasks, leaving us better at our jobs so we can focus on our soft skills. Work that relies on judgment, motivation, collaboration, and ideating "is the fun part of work," Fuller says, with the added benefit of being much harder to automate.

As in the case with skills-based hiring, it's less that AI's preponderance indicates that jobs are disappearing, and more that the needed skills are shifting. For most workers, the future will be less about evaporating job opportunities and more about a pressing need to upskill.

The throughline of each of these threats is that while they may purport to slash job openings, or merely make it easier for someone else to nab your dream job, what they actually do is redefine what a job entails, and who is capable of holding one. At the end of the day, employment is a human-to-human interaction, and these threats don't render soft skills or interpersonal bonds any less valuable.

"You have to think about these trends through the lens of human experience and human desire and human biases," Fuller said. "The best companies in the future will be using the individual as a unit of analysis. Not the job description, not the paygrade."

And we're hardly in a catastrophe. Unemployment this year has held steady at record lows, indicating more jobs than seekers. "The bottom line is that the labor market for white-collar jobs is incredibly dynamic," Juan Pablo Gonzalez, senior client partner and sector leader for professional services at Korn Ferry, told the Society for Human Resource Management (SHRM) in June. "Work is being reimagined, not eliminated. It's not that the jobs are going away; the jobs are changing."

Besides, Fuller says workers won't hang onto jobs that don't fulfill them because they think their options are limited. "People will be picky where they can," he explains, and they'll keep looking for the jobs that don't dominate their lives.

The enduring grand technological innovations are those that eliminate grunt work, in turn creating a new class of jobs, not fewer human jobs altogether. These three much-discussed threats also provide a glimpse into a future that will involve upskilling. If a typical job description ten or twenty years from now looks drastically different, as it is wont to do in our age of rapid advancement, at least we'll have a sense of why.

See the original post here:

Remote work, AI, and skills-based hiring threaten to put our jobs on the chopping block, but experts say those fears are overblown - Fortune

Airbnb using AI technology to crack down on New Year’s Eve parties – 10TV

COLUMBUS, Ohio – Airbnb is using new AI technology to help crack down on New Year's Eve parties around the world.

The artificial intelligence technology identifies one-to-three-night booking attempts for entire-home listings over the holiday that could be high risk for disruptive and unauthorized parties, then blocks those attempts from being made.

This technology looks at hundreds of signals, like how long the stay is, how far the listing is from the guest's location and whether the reservation is being made at the last minute.
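Airbnb hasn't published its model, but the general shape of this kind of system — combine several booking signals into a risk score, block when the score crosses a threshold — can be sketched with a toy rule-based version. The signal names, weights, and threshold below are illustrative assumptions, not Airbnb's actual feature set, which is far larger and almost certainly learned rather than hand-set:

```python
def booking_risk_score(nights: int, km_from_listing: float,
                       hours_before_checkin: float, entire_home: bool) -> int:
    """Toy risk score: each hypothetical signal adds a hand-set weight."""
    score = 0
    if nights <= 3:                # short stay over a party-prone holiday
        score += 2
    if km_from_listing < 40:       # guest lives close to the listing
        score += 2
    if hours_before_checkin < 48:  # last-minute reservation
        score += 1
    if entire_home:                # entire-home listings are the stated target
        score += 1
    return score


def should_block(nights: int, km_from_listing: float,
                 hours_before_checkin: float, entire_home: bool,
                 threshold: int = 5) -> bool:
    """Block the booking attempt when the combined score crosses the threshold."""
    return booking_risk_score(nights, km_from_listing,
                              hours_before_checkin, entire_home) >= threshold
```

A one-night, same-city, last-minute, entire-home booking would score high and be blocked under this sketch, while a week-long stay booked far in advance would pass. A production system would score hundreds of signals with a trained model, but the block-at-threshold structure is the same.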

The system is being used in countries and regions including the U.S., Puerto Rico, Canada, the UK, France, Spain, Australia and New Zealand.

"For our guests who are able to make reservations, we require them to agree to our party policy, and if they break the rule, they risk suspension or removal from Airbnb," said Naba Banerjee, head of trust and safety at Airbnb.

Here in Columbus, Erich Schick is the CEO and owner of Air Butler LLC. He manages 56 short-term rental properties within the I-270 belt.

"There's no better family feel; there's no way to gather, I think, than at a short-term rental," said Schick.

He said in the past he's dealt with issues like parties or unwanted guests staying within his properties.

"We would have issues when we would do one-night stays, when we didn't have a lot of controls or rules in place. We'd have some events, we'd have some parties, some not-safe situations," said Schick.

Schick said his properties are roughly 70% full for the month of December. But with any new piece of technology, there are hiccups.

Schick said he's run into issues where qualified guests who would book for three nights were blocked by the system.

"We always provide a human touch. If a guest is qualified, we can usually get them past the AI roadblocks if we think they're going to be a good guest," he said.

He's in favor of the new restrictions and said it will help hosts continue to provide great service.

Link:

Airbnb using AI technology to crack down on New Year's Eve parties - 10TV

A Once-in-a-Generation Investment Opportunity: 1 Artificial Intelligence (AI) Growth Stock to Buy Now and Hold Forever – The Motley Fool

Microsoft co-founder Bill Gates says artificial intelligence (AI) is the most revolutionary technology he has seen in decades. He formed that opinion after watching ChatGPT ace a college-level biology exam that included open-ended essay questions. Gates shared his thoughts in a recent blog post:

The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the internet, and the mobile phone. It will change the way people work, learn, travel, get healthcare, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.

Take a moment to consider how profoundly those technologies changed the world, as well as the wealth they created in the process. Such inflection points come along rarely, and many experts (including Gates) believe AI is the next one. The best way for investors to capitalize on that once-in-a-generation opportunity is to build a basket of AI stocks.

Here's why Cloudflare (NET -0.62%) belongs in that basket.

To understand why Cloudflare should benefit from the artificial intelligence (AI) boom, investors must first know what the company does and how it compares to peers. The short answer is that Cloudflare makes the internet faster and safer. The longer answer is that it provides a broad range of application, network, security, and developer services that accelerate and protect corporate software and infrastructure.

Cloudflare has differentiated itself through performance and scale. It operates the fastest cloud network and developer platform on the market, and it handles about 20% of internet traffic. Its platform is also cloud neutral, meaning it improves performance and security across public clouds and private data centers. That makes Cloudflare a useful partner even for businesses that rely on other cloud providers like Amazon Web Services and Microsoft Azure.

The upshot of its unmatched performance is that Cloudflare has established a strong presence in several cloud computing verticals, including developer services. Forrester Research recently recognized Cloudflare as the leader in edge development platforms, citing a superior product (i.e., Cloudflare Workers) and a stronger growth strategy compared to other vendors.

Management believes that its value proposition for developers -- unmatched speed and cloud-neutral technology -- will make Cloudflare a key part of the AI value chain. The company is leaning into that opportunity. It recently announced Workers AI, a service that allows businesses to build AI applications and run machine learning models on its network. Workers AI is accelerated by Nvidia GPUs and supported by other Cloudflare products like R2 (object storage) and Vectorize (vector database).

It may be a few years before those innovations become meaningful revenue streams, but the company is very optimistic. CEO Matthew Prince says that "Cloudflare is the most common cloud provider used by the leading AI companies." He also believes Cloudflare is "uniquely positioned to become a leader in AI inferencing," a market that represents the biggest opportunity in AI.

Beyond developer services, Cloudflare also has a strong presence in several cybersecurity markets. Forrester Research recently named the company a leader in email security, and the International Data Corp. recognized its leadership in zero trust network access.

One reason for that success is the data advantage created by its immense scale. As previously mentioned, about 20% of internet traffic flows across the Cloudflare network. That gives the company deep insight into performance issues and security threats across the web, and it uses that information to continuously route traffic more efficiently and counter threats more effectively.

Cloudflare brings together network and security services with Cloudflare One, a secure access service edge (SASE) platform that protects and connects users to private applications, public cloud services, and the open internet. Cloudflare One addresses the widespread push to modernize network security. Consultancy Gartner believes 80% of enterprises will adopt SASE architecture by 2025, up from 20% in 2021.

Cloudflare values its addressable market at $164 billion in 2024, but sees that figure surpassing $200 billion by 2026. Developer services and network security services account for most of that total. Cloudflare already has a strong presence in both markets, meaning the company is well positioned for future growth.

Indeed, Cloudflare ranked No. 6 on the Fortune Future 50 List for 2023, an annual assessment of the world's largest companies based on long-term growth prospects. Making the list at all is an achievement, but taking sixth place is a testament to the company's tremendous potential. The authors attributed Cloudflare's high placement to opportunities in AI inferencing and cybersecurity.

With that in mind, analysts at Morningstar expect the company to grow revenue by 34% annually over the next five years, a reasonable estimate given that revenue increased by 46% annually during the past three years. In that context, the stock's current valuation of 23.5 times sales looks reasonable, and it's certainly a discount to the three-year average of 38.7 times sales.
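As a quick sanity check on that estimate, 34% annual growth compounded over five years implies revenue multiplying roughly 4.3x — a simple compounding calculation:

```python
# Compound Morningstar's projected 34% annual growth rate over five years.
growth_rate = 0.34
years = 5
multiple = (1 + growth_rate) ** years
print(f"Projected revenue multiple: {multiple:.2f}x")  # about 4.32x
```

The same formula applied to the historical 46% rate over three years gives about 3.1x, which matches the faster growth Cloudflare actually posted.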

That said, Cloudflare is not cheap and its share price will likely be volatile. But patient investors comfortable with price swings should feel confident in buying a small position in this growth stock today, especially as part of a broader basket of AI stocks.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Trevor Jennewine has positions in Amazon and Nvidia. The Motley Fool has positions in and recommends Amazon, Cloudflare, Microsoft, and Nvidia. The Motley Fool recommends Gartner. The Motley Fool has a disclosure policy.

See the original post:

A Once-in-a-Generation Investment Opportunity: 1 Artificial Intelligence (AI) Growth Stock to Buy Now and Hold Forever - The Motley Fool

Why Bitcoin, Solana, and Near Protocol Jumped on Wednesday – The Motley Fool

The crypto market continued its bullish run on Wednesday with most of the major cryptocurrencies moving higher. There wasn't any major news about the industry overall, but small tidbits are giving investors enough reason to buy.

Bitcoin (BTC -0.34%) is up 3.2% over the last 24 hours as of 4 p.m. ET, Solana (SOL 13.29%) has jumped 10%, and Near Protocol (NEAR 9.61%) is up a whopping 18.6%.

Last night it was announced that BlackRock, Nasdaq, and the Securities and Exchange Commission met yesterday for a second time to discuss rule changes needed to list a Bitcoin exchange-traded fund (ETF). The possibility of an ETF coming to market has been the subject of speculation all year, and these appear to be at least small steps toward making that a reality.

An ETF could bring new money into the crypto industry by making it more accessible. It's still relatively difficult to buy and sell cryptocurrencies, so a low-cost exchange-traded fund would allow investors to get access without having crypto wallets or working with centralized crypto exchanges.

Solana's rise continues, helped by news that it's now home to more decentralized trading volume than Ethereum. Solana's low cost and fast speed have made it a go-to blockchain for developers and that's helping push the token higher.

Near Protocol is seeing an inflow of investor interest today as the blockchain attracts more developers and collaborations, opening up potential use cases.

The two drivers of cryptocurrency right now are the flow of funds into the industry and the utility being built on the blockchain. All three of these cryptocurrencies are benefiting from that.

A Bitcoin ETF could bring in more investors with a low-cost financial instrument, and that could pave the way to more cryptocurrencies getting ETFs. And the SEC ruling positively in this case could lead to a thawing of crypto regulation in general.

I think the long-term driver of the industry will be a growing number of use cases for the blockchain. From financial instruments to logistics uses, companies big and small are testing how they can use the blockchain. I think that will open up more use cases and companies built using this technology.

It's less clear how that will benefit specific tokens. Solana, for example, is so low-cost that even a large rise in transactions won't increase the fees, and tokens like the stablecoin USD Coin can be used as the medium of transaction.

Speculation is still rampant in crypto, and that's driving a lot of the increase in value short-term. But there are improvements in the underlying technology and utility, which investors shouldn't overlook. And as long as there's an inflow of funds and an increase in innovation, the market can keep moving higher.

Travis Hoium has positions in Ethereum and Solana. The Motley Fool has positions in and recommends Bitcoin, Ethereum, and Solana. The Motley Fool recommends Nasdaq. The Motley Fool has a disclosure policy.

Read the original:
Why Bitcoin, Solana, and Near Protocol Jumped on Wednesday - The Motley Fool
