
Lessons from finance's experience with artificial intelligence – Hindustan Times

Who are the earliest adopters of new technologies? Cutting-edge stuff tends to be expensive, meaning the answer is often the extremely rich. Early adopters also tend to be incentivised by cut-throat competition to look beyond the status quo. As such, there may be no group more likely to pick up new tools than the uber-rich and hyper-competitive hedge-fund industry.

This rule appears to hold for artificial intelligence (AI) and machine learning, which were first employed by hedge funds decades ago, well before the recent hype. First came the quants, or quantitative investors, who use data and algorithms to pick stocks and place short-term bets on which assets will rise and fall. Two Sigma, a quant fund in New York, has been experimenting with these techniques since its founding in 2001. Man Group, a British outfit with a big quant arm, launched its first machine-learning fund in 2014. AQR Capital Management, from Greenwich, Connecticut, began using AI at around the same time. Then came the rest of the industry. The hedge funds' experience demonstrates AI's ability to revolutionise business, but it also shows that doing so takes time and that progress can be interrupted.

AI and machine-learning funds seemed like the final step in the march of the robots. Cheap index funds, with stocks picked by algorithms, had already swelled in size, with assets under management eclipsing those of traditional active funds in 2019. Exchange-traded funds offered cheap exposure to basic strategies, such as picking growth stocks, with little need for human involvement. The flagship fund of Renaissance Technologies, the first-ever quant outfit, established in 1982, earned average annual returns of 66% for decades. In the 2000s fast cables gave rise to high-frequency market makers, including Citadel Securities and Virtu, which were able to trade shares by the nanosecond. Newer quant outfits, like AQR and Two Sigma, beat humans' returns and gobbled up assets.

By the end of 2019, automated algorithms took both sides of trades; more often than not high-frequency traders faced off against quant investors, who had automated their investment processes; algorithms managed a majority of investors' assets in passive index funds; and all of the biggest, most successful hedge funds used quantitative methods, at least to some degree. The traditional types were throwing in the towel. Philippe Jabre, a star investor, blamed computerised models that had "imperceptibly replaced" traditional actors when he closed his fund in 2018. As a result of all this automation, the stock market was more efficient than ever before. Execution was lightning fast and cost next to nothing. Individuals could invest savings for a fraction of a penny on the dollar.

Machine learning held the promise of still greater gains. As one investor described it, quantitative investing starts with a hypothesis: momentum, say, the idea that stocks which have risen faster than the rest of the index will continue to do so. That hypothesis can then be tested against historical data to see whether past winners really did keep rising. By contrast, with machine learning, investors could start with the data and look for a hypothesis. In other words, the algorithms could decide both what to pick and why to pick it.
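To make the hypothesis-first approach concrete, here is a minimal sketch in Python. The file name, column layout, lookback windows and decile cut-off are illustrative assumptions, not details disclosed by any fund named here; it simply computes a momentum signal and compares past winners' forward returns with the market average.

```python
# Hypothesis-first quant investing, sketched: compute a simple momentum
# signal and check it against historical prices.
import pandas as pd

# One price column per ticker, indexed by date (layout is assumed).
prices = pd.read_csv("prices.csv", index_col="date", parse_dates=True)

# Momentum signal: trailing 12-month return, skipping the most recent month
# (a common convention to sidestep short-term reversal effects).
momentum = prices.pct_change(252).shift(21)

# Test the hypothesis: does the top decile of past winners outperform
# the cross-sectional average on the next day?
forward = prices.pct_change().shift(-1)
winners = momentum.rank(axis=1, pct=True) >= 0.9
print("past winners' average forward return:", forward[winners].mean().mean())
print("index average forward return:", forward.mean().mean())
```

A machine-learning fund inverts this flow: instead of hand-coding the `momentum` feature, it feeds raw features to a model and lets the model discover which combinations predict returns.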

Yet automation's great march forward has not continued unabated; humans have fought back. Towards the end of 2019 all the major retail brokers, including Charles Schwab, E-Trade and TD Ameritrade, slashed commissions to zero in the face of competition from a new entrant, Robinhood. A few months later, spurred by pandemic boredom and stimulus cheques, retail trading began to spike. It reached a peak in the frenzied early months of 2021 when day traders, co-ordinating on social media, piled into unloved stocks, causing their prices to spiral higher. At the same time, many quantitative strategies seemed to stall. Most quants underperformed the markets, as well as human hedge funds, in 2020 and early 2021. AQR closed a handful of funds after persistent outflows.

When markets reversed in 2022, many of these trends flipped. Retails share of trading fell back as losses piled up. The quants came back with a vengeance. AQR's longest-running fund returned a whopping 44%, even as markets shed 20%.

This zigzag, and robots' growing role, holds lessons for other industries. The first is that humans can react in unexpected ways to new technology. The falling cost of trade execution seemed to empower investing machines, until costs went to zero, at which point it fuelled a retail renaissance. Even if retail's share of trading is not at its peak, it remains elevated compared with before 2019. Retail trades now make up a third of trading volumes in stocks (excluding market makers). Their dominance of stock options, a type of derivative bet on shares, is even greater.

The second is that not all technologies make markets more efficient. One of the explanations for AQR's period of underperformance, argues Cliff Asness, the firm's co-founder, is how extreme valuations became and how long a "bubble in everything" persisted. In part this might be the result of overexuberance among retail investors. "Getting information and getting it quickly does not mean processing it well," reckons Mr Asness. "I tend to think things like social media make the market less, not more, efficient... People don't hear counter-opinions, they hear their own, and in politics that can lead to some dangerous craziness and in markets that can lead to some really weird price action."

The third is that robots take time to find their place. Machine-learning funds have been around for a while and appear to outperform human competitors, at least a little. But they have not amassed vast assets, in part because they are a hard sell. After all, few people understand the risks involved. Those who have devoted their careers to machine learning are acutely aware of this. "In order to build confidence, we have invested a lot more in explaining to clients why we think the machine-learning strategies are doing what they are doing," reports Greg Bond of Man Numeric, Man Group's quantitative arm.

There was a time when everyone thought the quants had figured it out. That is not the perception today. When it comes to the stock market, at least, automation has not been the winner-takes-all event that many fear elsewhere. It is more like a tug-of-war between humans and machines. And though the machines are winning, humans have not let go just yet.


© 2023, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on http://www.economist.com

Excerpt from:
Lessons from finance's experience with artificial intelligence - Hindustan Times

Read More..

BI Governor Claims Artificial Intelligence Will Change Indonesia’s … – Tempo.co English

TEMPO.CO, Jakarta - The Governor of Bank Indonesia (BI), Perry Warjiyo, said that artificial intelligence (AI) will change Indonesia's economic landscape, speaking during a virtual discussion themed "The Future of Indonesia's Economy in the Era of AI" held by the Indonesian Bachelor of Economics Association (ISEI).

According to Perry, AI is starting to be utilized in various sectors of the economy, such as manufacturing, finance, health, and transportation. He cited a survey by PricewaterhouseCoopers (PwC) which found that AI implementation is intended to improve productivity and decision-making quality. "And to help innovate products and services," said Perry during the discussion on Monday, August 7, 2023.

Perry also cited data from the McKinsey Global Institute estimating the global economic potential of AI at US$2.6 trillion, and possibly as much as US$4.4 trillion. "That's the potential of AI and its derivatives. How AI is utilized will also affect the economic landscape in Indonesia," said Perry.

Meanwhile, ISEI Secretary Yan Partawidjaja explained that AI is currently the main theme of various multidimensional discussions, especially with the release of ChatGPT by the US-based AI company, OpenAI.

Questions arise along with the rising popularity of AI, including how decision-makers and entrepreneurs will adapt and what the worst-case scenario is for developing countries such as Indonesia. "Along the way, questions will be asked about how rational expectations work in relation to fiscal and monetary regulations when we ask AI for the best possible outcome," Yan concluded.

MOH. KHORY ALFARIZI

Click here to get the latest news updates from Tempo in Google News

Continued here:
BI Governor Claims Artificial Intelligence Will Change Indonesia's ... - Tempo.co English

Read More..

Improving the Accuracy of Artificial Intelligence: Companies Make … – Fagen wasanni

In recent months, several companies have been working to address the ongoing challenges and improve the accuracy of artificial intelligence (AI) technology. Despite these efforts, some issues still persist.

Companies have recognized the importance of refining AI algorithms to deliver more precise and reliable results. By enhancing the underlying programming, developers aim to minimize errors and enhance the overall performance of AI systems.

Additionally, companies have been investing in extensive data collection and analysis to train AI models effectively. This process involves compiling vast amounts of data, which helps the algorithms learn and adapt to different scenarios. By using diverse and comprehensive datasets, developers hope to increase the accuracy and reduce biases in AI decision-making.

Furthermore, many companies are implementing rigorous testing and evaluation protocols to ensure the effectiveness of AI systems. This involves conducting extensive trials and simulations to identify and rectify any flaws or limitations in the technology. Such testing enables developers to fine-tune AI algorithms and enhance their capabilities.

Despite these endeavors, challenges persist. One of the primary concerns in AI accuracy is the presence of biases within the algorithms. Biases can lead to discriminatory outcomes, particularly in areas such as facial recognition, hiring processes, and criminal justice. Addressing these biases remains a priority for companies dedicated to improving AI technology.

In conclusion, while companies have implemented changes to bolster the accuracy of artificial intelligence, there are still ongoing challenges. Efforts to enhance AI algorithms, extensive data collection, and rigorous testing protocols are steps in the right direction. However, biases within algorithms continue to be a pressing issue that must be addressed to foster fair and unbiased AI technology.

The rest is here:
Improving the Accuracy of Artificial Intelligence: Companies Make ... - Fagen wasanni

Read More..

The Safety of Artificial Intelligence: Balancing Potential and Concerns – Fagen wasanni

With innovation comes growth, investment opportunity, and progress. However, it also brings along growing pains and fears. Just like the advent of the internet in the 1990s, artificial intelligence (AI) is one of the most polarizing technologies of 2023.

The potentially limitless applications of AI, particularly as an investing tool, have raised concerns among our readers. They wonder whether AI is safe or if its negative impacts on society will outweigh the incredible good it has already done for the U.S. economy.

The buzz this year revolves around defining AI itself. When a new technology emerges, there are always different perspectives. Some view it with optimism and enthusiasm (the Baptists), while others approach it with caution and skepticism (the Bootleggers).

One crucial aspect to consider is the impact of AI on total factor productivity growth, which is vital for the economy. AI has been instrumental in enhancing productivity and efficiency across various industries. It is revolutionizing sectors like healthcare, finance, transportation, and many others.

Despite the concerns about the safety of AI, it is important to strike a balance between acknowledging its potential benefits and addressing potential risks. Implementing proper regulations and ethical frameworks can help ensure the responsible development and deployment of AI technology.

While there are valid concerns surrounding AI, it is essential to recognize the transformative power it holds. AI has the potential to improve lives, drive economic growth, and reshape industries. By embracing AI while being mindful of its ethical implications, we can maximize its positive impact while minimizing any negative consequences.

Read the original:
The Safety of Artificial Intelligence: Balancing Potential and Concerns - Fagen wasanni

Read More..

Artificial Intelligence in Real Estate: The Rise of Bot Agents – Fagen wasanni

Artificial intelligence (AI) algorithms are now capable of accurately predicting the price of a house by simply analyzing visual data, such as Google Street View images. However, while this technology offers great potential, it also raises concerns about its impact on the property market.

Visual inspections play a vital role in real estate. Agents gather data on a property's layout, comparable prices, and neighborhood amenities, but they also rely on in-person visits to make accurate assessments. Skilled professionals can observe subtle details such as potholes, storefronts, car models, and the composition of crowds, all of which provide valuable insights into a property's value. This street-level assessment is particularly important in identifying up-and-coming neighborhoods before prices reflect their popularity.

Visual AI now has the ability to replicate this street-level analysis on a larger scale. Researchers at MIT's Senseable City Lab trained an AI model using 20,000 pictures of homes in Boston and data on how their prices changed over time. Their deep learning algorithm identified correlations between visual features of homes and changes in their values. By incorporating additional variables like structural information and neighborhood amenities, the algorithm accurately predicted how prices would evolve over time.
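To illustrate the general shape of such a model (a sketch under assumptions, not the MIT team's actual architecture), a pretrained image network can be fine-tuned to regress price change from a photo of a home. The dataset layout and hyperparameters below are illustrative.

```python
# Fine-tune a pretrained CNN to predict price change from house images.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)  # single output: price change

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(images: torch.Tensor, price_changes: torch.Tensor) -> float:
    """images: (N, 3, 224, 224); price_changes: (N, 1)."""
    optimiser.zero_grad()
    loss = loss_fn(model(images), price_changes)
    loss.backward()
    optimiser.step()
    return loss.item()
```

In practice, as the article notes, the visual features would be combined with structural information and neighborhood variables rather than used alone.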

The potential applications of visual AI extend beyond predicting property values. As demonstrated in a recent study, analyzing 27 million street view images across the US enabled researchers to predict various aspects of a neighborhood's profile, including poverty levels, crime rates, and public health indicators. The next step in this advancement could involve using publicly-accessible photos from real estate websites and social media to assess the interior of homes, identifying features like renovated bathrooms or upscale kitchens.

While these technologies, combined with broader economic indicators like mortgage rates, could become powerful tools for the real estate industry, they also pose certain risks. Algorithms may perpetuate biases, such as undervaluing properties belonging to racial minorities. Furthermore, relying on AI predictions could create self-fulfilling prophecies, as individuals may optimize their homes to impress algorithms rather than meet personal preferences or needs.

To navigate these challenges, a balance of regulation and experimentation is necessary. Increasing the number of AI models in use can prevent undue influence from a single imperfect algorithm. However, it will still be up to human judgment to interpret the insights provided by these new visual AI technologies. While AI can predict much about the world, reimagining a better future remains a uniquely human endeavor.

Read more from the original source:
Artificial Intelligence in Real Estate: The Rise of Bot Agents - Fagen wasanni

Read More..

Artificial intelligence developed to identify cancer mutations – The Straits Times

SINGAPORE – The development of personalised cancer treatments will get a boost from an artificial intelligence-based method that can identify cancer mutations in DNA fragments inside tumour samples.

The method, called Variant Network (VarNet), uses deep learning to detect cancer mutations. It was developed by scientists from the Genome Institute of Singapore (GIS), a research institute under the Agency for Science, Technology and Research (A*Star).

"Cancer is generally thought to be caused by mutations in our genomes, and it's essential to identify these mutations to tailor the most effective treatment for the individual patients," said Dr Anders Skanderup, group leader of the GIS Laboratory of Computational Cancer Genomics.

In line with the precision medicine approach, where medical treatment is tailored to the individual based on factors such as variations in genetics and environment, drugs prescribed for cancer treatment increasingly work only when certain mutations are present, he said.

A high level of accuracy is needed when identifying cancer mutations, he added.

VarNet is a mutation caller, which identifies mutations by sifting through raw DNA sequencing data.

Using artificial intelligence (AI), VarNet is trained to identify mutations through exposure to millions of real cancer mutations as well as to examples of false cancer mutations.

This enables VarNet to detect real mutations while ignoring false ones, Dr Skanderup told The Straits Times.

A paper published in the peer-reviewed scientific journal Nature Communications in July 2022 found VarNet often exceeded existing mutation identification algorithms in terms of accuracy.

While other AI-based methods of detecting cancer mutations exist, these rely heavily on human experts providing vast amounts of detailed training data to the models to train them to identify mutations, he said.

Deep learning, an AI method where computers are taught to process data in a way that mimics the human brain, allows VarNet to distinguish between real and false mutations, essentially teaching itself the rules of doing so with minimal human intervention.

The paper's first author, Kiran Krishnamachari, an A*Star Computing and Information Science scholar affiliated with GIS, noted that VarNet is able to learn to detect mutations from the raw data in the manner a human expert would when manually inspecting potential mutations.

"This gives us the confidence that the system can learn relevant mutational features when trained on vast sequencing datasets, using our weak-supervision strategy that does not require excessive manual labelling," he said.
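To make the idea concrete, here is a minimal sketch of how a learned mutation caller of this general kind might look: each candidate variant site is encoded as a small image-like tensor built from aligned sequencing reads, and a convolutional network scores whether the site is a real mutation. The channel layout and network shape are illustrative assumptions, not VarNet's published architecture.

```python
# Score candidate mutation sites with a small CNN over read "pileup" tensors.
import torch
import torch.nn as nn

class MutationClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # Input channels encode read bases, base quality, strand, etc.
            nn.Conv2d(5, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),  # logit: is this site a real mutation?
        )

    def forward(self, pileup: torch.Tensor) -> torch.Tensor:
        return self.net(pileup)

model = MutationClassifier()
pileup = torch.randn(8, 5, 100, 40)       # batch of 8 candidate sites
scores = torch.sigmoid(model(pileup))     # values near 1 suggest real mutations
```

Training such a model on millions of real and false mutation examples, as the article describes, is what lets it learn the filtering rules without hand-written heuristics.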

Follow this link:
Artificial intelligence developed to identify cancer mutations - The Straits Times

Read More..

When Artificial intelligence writes the doctor’s letter, the doctor and … – Innovation Origins

In Germany alone, around 150 million doctors' letters are written every year. This takes precious time, which could be used elsewhere. The doctor's letter generator, currently being developed by scientists from the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, could provide a solution for creating the document in a fraction of the time. The application is based on a combination of algorithms and artificial intelligence for Natural Language Processing (NLP). The new white paper "Natural Language Processing in the Medical Sector" lists the numerous additional opportunities for hospitals using NLP.

"Health data is currently one of the fastest-growing data sets. How we process this data and what possibilities it offers for patients, care professionals, and doctors is an exciting question and one to which we have at least part of the answer," explains Dario Antweiler, Healthcare Analytics team leader at Fraunhofer IAIS. He has authored a white paper with his team illuminating the current developments and opportunities for document-based processes in the medical field.


In the paper, the experts discuss Large Language Models (LLMs), which have undergone drastic development in recent months, catapulting them into the public spotlight. The best-known example of an LLM currently is ChatGPT, a chatbot that creates natural-sounding texts. "In the not-too-distant future, these models will be able to work multimodally, meaning that they'll be able to process images and tabular data as well as the texts and spoken language with which they already work," explains Antweiler. This opens up new possibilities in the medical sector, which could free up staff for other tasks and improve patient treatment processes while considering data protection at all times.

The healthcare sector faces numerous challenges, such as staff shortages, cost pressures and an information overload from the constantly increasing amounts of data. Much of the hospital data is still laboriously analyzed by hand. Evaluating, analyzing, and drawing conclusions from the data costs valuable time at various points, a commodity lacking in the stressful day-to-day of hospitals. "In the worst cases, key information goes missing, making treatments more difficult, leading to expensive reexaminations or incomplete accounting," Antweiler explains.

To find a solution for these problems in hospitals, the Healthcare Analytics team at Fraunhofer IAIS is working closely with medical professionals. Together with several university hospitals, including Essen University Hospital (Universitätsmedizin Essen), it is currently developing various possibilities for information extraction from documents. The next objective is to bring the doctor's letter generator to market by the end of 2024, simplifying the creation of discharge letters. To do this, the AI analyzes all existing documents and creates a natural-sounding text containing easy-to-understand patient explanations. After checking it and making changes or additions if required, the doctors can send the letter at the click of a button, in a fraction of the time needed to create it from scratch. Another advantage is that patients, who often have to wait for this document on the day of their discharge, can leave the hospital more quickly.
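As a rough illustration of how such a generator might be wired up (a sketch under assumptions, not Fraunhofer IAIS's implementation; the model name and prompt are placeholders, and a real deployment would target an on-premise model for data protection):

```python
# Draft a discharge letter from a patient's documents for clinician review.
from openai import OpenAI

client = OpenAI()  # stand-in client; a hospital would use an on-premise model

def draft_discharge_letter(documents: list[str]) -> str:
    record = "\n\n".join(documents)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You draft hospital discharge letters. Summarise the "
                        "record into diagnosis, treatment, medication and "
                        "follow-up, in plain language the patient can "
                        "understand. A doctor reviews and edits before sending."},
            {"role": "user", "content": record},
        ],
    )
    return response.choices[0].message.content
```

The crucial step in the workflow described above is the human check: the model only produces a draft, and the doctor remains responsible for the letter that is actually sent.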


Other functions of Clinical NLP reduce the workload of the medical staff, since the AI automatically collates critical information from a patient's medical records and makes it available to all clinical staff in a clear, structured format. Information is available in next to no time and can be thoroughly processed and made wholly accessible to medical staff. Dario Antweiler says: "In most hospitals, countless texts are evaluated manually every day. This is repeated in various departments and again after discharge by the family physician or specialist. Our applications make these processes fully automated, quick and precise, and as regards data protection secure, too. Healthcare systems, and especially staff and patients, would benefit from this."

The rest is here:
When Artificial intelligence writes the doctor's letter, the doctor and ... - Innovation Origins

Read More..

Who is really responsible for ethical AI? – Mobile World Live

In January, TIME revealed Microsoft-backed OpenAI outsourced workers in Kenya in late 2021 to moderate internet data, forming a fundamental part of developing generative AI sensation ChatGPT's safety system.

According to documents seen by the news outlet, moderators earned around $2 per hour to label texts which detailed injuries, sexual abuse and self-harm. These workers were also asked to collect images, some reportedly illegal under US law, in a separate project for OpenAI's image generator DALL-E.

In a statement to TIME, OpenAI explained it took the well-being of its contractors very seriously and that programmes offering support were available through the outsourcing company, which believed its employees did not request support through the right channels.

The work involved was so traumatic that the company dealing with OpenAI to outsource the jobs cut short its contract with the AI powerhouse, a recent Wall Street Journal article indicated. Meanwhile, a growing body of research continues to reveal big technology companies' dependence on workers in the global south to carry out this taxing work as part of the mission to make AI safe.

Surveys conducted over the years have also revealed that general-purpose AI deployed in biometrics, policing and housing systems has already caused gender and racial discrimination.

As ChatGPT began to fully take off, the recent dismissal of Microsoft's responsible AI team raised eyebrows and questions over whether ethical concerns are actually a priority in the multibillion-dollar AI economy.

That is not to say the technology sector as a whole is not taking the risks around generative AI seriously.

Major industry figures did indeed call for a pause in the technology's development until a robust AI act is in place. However, researchers speaking to Mobile World Live (MWL) believe the public should look a little further beyond policymaking.

Abid Adonis, researcher at Oxford Internet Institute, argues the task of ensuring ethical AI needs to be expanded.

"Now, we only see two powers: regulators and big tech, but we also have civil society and scholars. And it's important to hear what marginalised groups say about this because it's missing from the discussion."

False AI

This view resonates with Dr Alison Powell, associate professor in Media and Communications at the London School of Economics and Political Science and director of the JustAI network at the Ada Lovelace Institute.

Powell told MWL the emphasis on artificial general intelligence, which industry heavyweights claim can eclipse humans' cognitive abilities and therefore dominate job markets, is already in itself harmful.

"It's harmful because it focuses on an imagined world rather than the actual world we live in."

This is particularly reflected in Large Language Models (LLMs) built on internet data. Powell pointed out that while there are a lot of languages spoken in the actual world, English is largely dominant on the internet.

"In the world, there are many ways that people experience things, express ourselves and work together. Not all of these are present online."

Powell further warned about the hype around AIs decision-making abilities and suggested the technologys powers do not take into account social responsibilities.

This somewhat makes sense when considering the fact that generative AI posterchild ChatGPT falsely accused law professor Jonathan Turley of assaulting a student and made up a story about the death of Alexander Hanff, a privacy technologist who helped craft GDPR.

Other examples include data-filtering practices in GPT-3, which used a classification system to automatically discard obscene and inappropriate material.

Further flaws in LLM were highlighted in a recent report by The Washington Post, which stated tech companies had grown secretive about what they feed the AI, such as using data from websites that could be deemed discriminatory.

This backed up a study from 2021, which found generative AI has the potential to amplify privileged views, pointing to GPT-2's training data extracted from Reddit, Twitter and Wikipedia, all of which have predominantly male users.

Cultural machine

Powell stressed the need to understand the social aspects where technology is more likely to cause harm before considering how to make it more ethical.

"AIs are institutional machines, they're social machines and they're cultural machines," she argued.

"If we're walking away from saying, 'How do we do this technically, in the gears?' then we produce that double bind. But if we take a step back, then we notice all of these systems are institutional systems. Thinking about making systems work along the lines of justice and inclusion is about not how the machines work, but how institutions work."

Adonis added that a nuanced public discussion on ethical technology will continue to play a strong role in future innovations and policymaking.

"If we build strong, fundamental discourses in many places on something we know will have detrimental effects to society, it will permeate into stakeholders and state actors. They will know what to do, and civil society will know what to do."

"I believe discourse and paradigm will shape the corridors of innovation."

For Powell, AI governance means enforcing existing laws, particularly those relating to data protection, anti-discrimination and human rights that apply to the institutional settings in which you put AI.

"I would continue to advocate for thinking about institutional settings employing AI, rather than thinking about it as an object of regulation itself," she added.

The editorial views expressed in this article are solely those of the author and will not necessarily reflect the views of the GSMA, its Members or Associate Members.

Originally posted here:

Who is really responsible for ethical AI? - Mobile World Live

Read More..

The pleasure and pain of living with AI – The New Indian Express

By Pratik Kanjilal | Published: 05th August 2023 12:01 AM

Doomsday stories about rogue artificial intelligence focus on the singularity, an inflexion point at which humans perceive an existential threat because the AI has surpassed their cognitive powers, and they fear the loss of control over themselves and their future. This usually happens in other-worldly situations: in The Terminator, the crisis is in the future, and in 2001: A Space Odyssey, it's in interplanetary space. Real-world disaster scenarios have AIs taking over nuclear facilities, tripping national power grids, and hijacking telecom systems and satellites using means that brush aside human controls. But who could have imagined that Hollywood writers, actors and people working in broadcast media would be the first to experience the singularity? It's very appropriate, though, since they are the very people who have brought AI dystopias to life for us.

The US film industry is in the midst of a historic strike led by the Writers Guild of America and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA). Its effects are being seen in the biggest venues (the cast of Oppenheimer walked out of the London premiere) and in niche events like the San Diego Comic-Con, which Hollywood A-listers skipped this time. Some of the strikers are no doubt driven by Luddite futility. Movies and media have become hugely technology-dependent, and artists will be no more successful in holding off the newest wave than Canute was, even if it threatens their livelihoods. But they should be successful in negotiating better terms for how the industry appropriates their talent because right now, it's like the Wild West.

Performing artists are particularly vulnerable because apart from their cultural products, their voices, their bodies, and their very selves are easily appropriated by AIs. In March, an AI researcher used a machine to sing a verse in the manner and voice of Kanye West, and some critics said that it was better than the rapper himself, which must have been pretty crushing. Industry specialists hazard that within a couple of years, every popular singer will have several AI models trained in their work: a Taylor Swift studio version, stadium version, and stadium-in-the-rain version (she does sing in the rain). Anyone could write a song and cut a track in any version of her voice. Without legal curbs on machine-made clones of human work, they would be free to exploit it commercially. Upon which, Ms Swift would have three choices: One, stop expecting revenue from streaming and the sale of albums and earn mainly from concerts like the Grateful Dead did. Two, spend the rest of her life suing people globally, which is a depressing choice. Or three, go on strike to seek terms under which she is properly compensated for third parties using her capital.

The writers and artists on strike in America have chosen option three. While singers can lose control of their voices to AIs, writers can be almost totally removed from the picture, and actors can have their entire bodies taken over. These people have rebelled because they face an existential crisis: studios and owners of production facilities have not clarified how they are to live with AI, or whether they will survive at all.

While the Hollywood strike has all the eyeballs because it's Hollywood, Meta has quietly upended the AI industry by open-sourcing its AI, Llama-2: you can install it on your laptop and take it offline. Open-sourcing is a smart step. Social media is buzzing with AI experts trying to sell you an interface or some prompts (verbal instructions to get an AI to do what you want) because they know the gold rush won't last. Artificial general intelligence (a machine you can talk to) will become the default gateway to the internet. That would make it critical infrastructure, and governments would have to guarantee public access to it. AI will be democratised, as Meta anticipates, and will grow faster because of it.
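For readers curious what "install it on your laptop" looks like in practice, here is a minimal sketch using the Hugging Face transformers library; the model runs fully offline once the weights are downloaded. The 7B chat variant and generation settings are illustrative choices, and access to the weights requires accepting Meta's licence.

```python
# Run an open-weights Llama-2 chat model locally.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama-2 chat models expect the [INST] ... [/INST] prompt format.
prompt = "[INST] Why does open-sourcing a model help catch bugs? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```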

Following the open-sourcing of Llama-2, Nick Clegg, former UK deputy prime minister and president for global affairs at Meta, has been talking up its advantages in India, which is Meta's biggest market. It is one way of allaying anxieties about AI. An AI can go rogue only via an unforeseen outcome, if programmers and trainers have not anticipated an unwanted activity. For instance, it is a good policy to tell chatbots to reply to queries only as an assistant and never as a system user; it prevents machines from playing boss. But unexpected issues can also lurk in the source code of the machine. If the code is open to inspection and manipulation by thousands of programmers and researchers everywhere, the issues are much likelier to be caught and patched than by a handful of bug-chasers employed by a corporation.
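The assistant-only policy mentioned above is straightforward to express in a chat API, where each message carries a role. A minimal sketch follows (the model name is a placeholder; the point is the role separation, under which user input can never occupy the system slot):

```python
# Role separation: the operator sets the system role once; users only ever
# speak in the user role, so the model cannot be told to "play boss".
from openai import OpenAI

client = OpenAI()

def ask(user_query: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a helpful assistant. Always reply as the "
                        "assistant; never act as the system or the user."},
            {"role": "user", "content": user_query},
        ],
    )
    return response.choices[0].message.content
```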

The most visible problem with AI chatbots is what the industry calls hallucinations: like propagandists, machines make up facts to substantiate their thinking and even provide Borgesian references to books which do not exist. Open-sourcing is likely to contain such issues faster, leaving humans to deal with the internet's real problem, which mass access to AIs is bound to increase: fake news created by humans themselves. The problem is never technology, which is value-neutral. The problem is the rogues at the console, who use technology like Archimedes' lever to multiply their destructive power.

Pratik Kanjilal, Editor of The India Cable (Tweets @pratik_k)

Read more:

The pleasure and pain of living with AI - The New Indian Express

Read More..

IT pros mull observability tools, devx and generative AI – TechTarget

As platform engineering teams increasingly take on enterprise performance management tasks in production, there have been missed opportunities to give developers insights into their applications, experts say.

The instrumentation of application code is an area where platform engineers and SREs have stepped in on behalf of application developers, who aren't as steeped in the complexities of distributed cloud infrastructure systems such as Kubernetes. Analysts have also seen an increase in observability teams, specifically within the platform engineering discipline, which connect developers' application performance insights with underlying infrastructure data.

"[There's] a move toward centralizing observability teams and centers of excellence," said Nancy Gohring, an analyst at IDC. "One driver for doing that is to try to control costs -- and one way those teams are trying to control costs is setting up data [storage] quotas for teams."

Such teams don't replace the need for developers to instrument their own application code but have helped ease the burden of managing the ongoing operational costs associated with collecting observability data, Gohring said.

There are some aspects of infrastructure monitoring, too, that developers don't need to concern themselves with, said Gregg Siegfried, an analyst at Gartner. Still, there remains a divide between the interests of platform teams in production observability and the interests of application developers, Siegfried said.

"I see an emergence of tools trying to give developers closer access to that data to give them more insight -- maybe allow them to put better instrumentation into the software," he said. "But none of them have really set the world on fire yet."

It's a commonly understood best practice in observability that developers instrument their own code before it's deployed to production, the better to manage its performance in the "you build it, you run it" mode of DevOps.

"I'm part of the OpenTelemetry End User Working Group. And recently we had somebody come in and talk to our user community about how they work in a company that really fosters an observability culture," said Adriana Villela, developer advocate at observability vendor LightStep, in a presentation at the recent Monitorama Conference. "The wonderful thing about it is that there is a directive from the executive saying, 'Thou shalt do observability and also developers are the ones instrumenting their own code,' which means that if you've got some disgruntled development team saying, 'I don't have time to instrument my code,' tough [s---]."

But some newer entrants to the market and their early customers question whether the devx, or developer experience, with observability needs to be quite so tough.

"Developers being able to add custom metrics to their code or spans or use observability tools is really critical to help developers take ownership of what they run in production," said Joseph Ruscio, a general partner at Heavybit, an early-stage investor in cloud infrastructure startups, in a Monitorama presentation.

However, to a new engineer, the overwhelming number of tools available for observability is "inscrutable and not at all welcoming to someone new to the craft," Ruscio said.

A production engineering team at a market research company is trying to make this task less onerous for developers using Groundcover's new Kubernetes-based APM tool. Groundcover uses eBPF to automatically gather data from Kubernetes clusters and associate it with specific applications, which could eventually replace the language-specific SDKs developers had used to instrument applications with incumbent vendor Datadog.

"For what we are calling custom metrics that monitor a specific application's behavior, these will continue to be the responsibility of the developers," said Eli Yaacov, a production engineer at SimilarWeb, based in New York. "But we, the production engineers, can provide the developers the [rest of the] ecosystem. For example, if they are running Kubernetes, they don't need to worry about [instrumenting for] the default CPU or memory. Groundcover collects all this data in Kubernetes without requiring the developers to integrate anything into their services."
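To illustrate the division of labour Yaacov describes (service and metric names are illustrative assumptions, not SimilarWeb's actual setup), a developer might add a single business-level custom metric in code while leaving infrastructure signals to the platform's automatic collection. The sketch uses OpenTelemetry, mentioned earlier in this article:

```python
# A custom application metric with OpenTelemetry: the platform's eBPF-based
# collection already covers CPU, memory and request plumbing, so the
# developer only instruments what is specific to the application.
from opentelemetry import metrics

meter = metrics.get_meter("checkout-service")  # hypothetical service name

orders_counter = meter.create_counter(
    "orders_processed",
    unit="1",
    description="Number of orders successfully processed",
)

def process_order(order_id: str) -> None:
    # ... business logic would run here ...
    orders_counter.add(1, {"region": "us-east"})
```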

Other emerging vendors also offer automated instrumentation features in debugging tools to instrument developers' apps without requiring code changes. These include Lightrun and Rookout.

Amid this year's general hype about generative AI, observability vendors have been quick to roll out natural language interfaces for their tools, mostly to add a user-friendly veneer over their relatively complex, often proprietary, data query languages. Such vendors include Honeycomb, Splunk, and most recently, Dynatrace and Datadog.


However, generative AI interfaces are not necessarily an obvious option to improve the developer experience of using observability tools, Siegfried said, as most developers are comfortable working in code.

"They have better things to do with their time than learn how to use an [application performance management] solution," he said.

Long term, generative AI and artificial general intelligence may have a significant effect, Ruscio said. But in the short term, Siegfried said he is skeptical that large language models such as ChatGPT will make a major impact on observability, particularly the developer experience.

Instead, unlike security and production-level systems monitoring, observability has yet to shift very far left in the development lifecycle, and developers would be best served by changing that, Ruscio said during his presentation. New and emerging vendors -- some of which are among Heavybit's portfolio companies -- are working in this area, termed observability-driven development.

"There's this missing mode where, wouldn't it be nice if you had some input when you are actually writing code as to what does this code look like in production?" Ruscio said. "It's cool that when I ship it, I'll get a graph. But why shouldn't I just know now, in my IDE, [how it will perform?]"

Beth Pariseau, senior news writer at TechTarget, is an award-winning veteran of IT journalism. She can be reached at [emailprotected] or on Twitter @PariseauTT.

Originally posted here:

IT pros mull observability tools, devx and generative AI - TechTarget

Read More..