Category Archives: Artificial General Intelligence
AI Researchers Claim They Can Double the Efficiency of Chatbots – Decrypt
Have you ever noticed that your AI chatbot get lost in the middle of a conversation, or it simply says it cannot handle prompts that are too long? Well, that is because each model has a limitation in its processing capabilities, and starts to suffer once it goes over that limit pretty much like they suffered from some kind of a digital attention deficit disorder. But this could soon change thanks to a new method for supercharging LLM capabilities.
Current LLMs have limited context capacities. For example, ChatGPT taps just 8,000 tokens of context, while Claude handles 100,000. Tokens are the basic units of text or code used by an LLM AI to process and generate language This restricts how much background information they can harness when formulating replies. Abacus AI has developed a method that allegedly doubles the usable context length for open-source LLMs like Metas Llama without compromising the model's accuracy in practical application.
Their technique involves "scaling" the position embeddings that track word locations in input texts. According to their Github page, Abacus AI claims that its scaling method drastically increases the number of tokens that a model can handle.
The researchers evaluated two scaled LlaMA variants on tasks like substring location and open-book QA. The scale 16 model maintained accuracy on real-world examples up to 16,000-word contexts, versus only 2,000 words in baseline Llama. It even showed some coherence at 20,000+ words, something that was not possible to achieve with just fine-tuning techniques.
The significance of context extension cannot be overstated. A narrow context window makes the model accurate but not really usable in complex tasks that require some background. Conversely, with an expanded context, LLMs can process and generate better responses but either take more time to do so or return sup-par results. Handling longer contexts efficiently could enable LLMs to absorb whole documents or multiple documents as background when generating text. This may lead to outputs that are more knowledge-grounded and consistent across long conversations.
However, the gains are not perfectly proportional to the scale factors.
Its still necessary to fine tune strategies because scaling alone doesnt guarantee high quality outputs. The Abacus team is also exploring advanced position encoding schemes from recent papers to further extend context capacity.
Their work suggests that scaling up existing LLMs is a viable path to expanding usable context length. This could democratize access to Large Language Models capable of handling lots of context at once.
Abacus AI has opened the doors of their repository for research purposes only, sharing code specific to their fine-tuning projects. This makes it possible to further iterate on its development and apply the fine tuning methods on virtually any open source Large Language Model.
With applications from personalized chatbots to creative writing aids, more memory-empowered LLMs could soon enable next-generation AI assistants that are conversant across diverse topics. For now, researchers are progressing rapidly to overcome technical constraints in pursuit of artificial general intelligence meaning, generalized human cognitive abilities in an AI model. Maybe someday our digital friends will handle as many tabs as we humans can, but without the headache!
Read more here:
AI Researchers Claim They Can Double the Efficiency of Chatbots - Decrypt
Are we destined to reiterate Oppenheimers missteps in the development of AI? – Times of India
The stirring narrative encapsulated in the recent 2023 film Oppenheimer echoes a historical recollection that holds vital lessons for our present-day world teetering on the brink of its next major technological shift Artificial General Intelligence (AGI). The film narrates the tale of J. Robert Oppenheimer, the architect of the atomic bomb, a man ensnared in an ethical conundrum that left him tormented by the potential devastation of his creation.
Oppenheimers story is a stark reminder that technology, unbridled and unchecked, can precipitate a Pandoras box of unwelcome outcomes. The atomic bomb, conceived in a world fraught with the horror of World War II, ostensibly a deterrent to further global conflict, obliterated Hiroshima and Nagasaki in a cataclysmic spectacle of power, annihilating hundreds of thousands of innocent lives.
The question thus raised is, are we poised to repeat such a grievous oversight with the genesis of Artificial General Intelligence?
AGI, in the simplest terms, can be defined as a class of artificial intelligence equal to human cognitive capabilities. It is an entity capable of learning, reasoning, comprehending and performing tasks at a level indistinguishable from a human being. It promises a utopian future where intractable global issues such as climate change and poverty could potentially be alleviated. However, just as the dawn of the nuclear age held both the promises of atomic energy and the threat of nuclear warfare, the emergence of AGI holds a similarly dual-edged sword.
The digital age presents us with the challenge to wield this formidable technology with caution and reverence, lest it transforms into a potential Digital Oppenheimer. The destructive capabilities of AGI, when weaponised, could be exponentially more devastating than any conventional armament. The danger lies not just in its misuse but also in the potential for an uncontrollable, self-evolving AGI that might eventually surpass our understanding and control.
To avert such a scenario, we mustnt lose sight of the invaluable lessons from Oppenheimers narrative. It underscores the consequence of unchecked ambition and the dire need for ethical foresight. It echoes the warning that the unbridled pursuit of power can culminate in unforeseen catastrophe.
We must endeavour to instil safety and ethics as foundational elements in the development of AGI. Just as nuclear non-proliferation treaties were vital to managing the proliferation of nuclear weapons, we require international consensus and regulations to impede the militarisation of AGI. Furthermore, significant investment is required to probe the ethical quandaries surrounding AGI and its potential societal impact.
Transparency and public engagement are crucial components of ethical AI development. It is only through open dialogues and the fostering of public trust that we can hope to navigate the tumultuous journey to AGI without seeding mass apprehension.
Moreover, educating the public about the potential risks and benefits of AGI is paramount. A well-informed populace can constructively participate in shaping policy discourse and ensure that the development of AGI aligns with societal interests rather than being monopolised by a handful of corporate or state entities.
Developing ethical guidelines for AI use is of equal importance. These should not merely be prescriptive but must be woven into the very fabric of AGI development. They should address not just how AI should be used, but also the moral implications of its application, the equity of access, and its potential to impact employment, privacy, and society at large.
In conclusion, the development of AGI, a daunting challenge, is a responsibility we cannot shirk. It offers the promise of solving some of humanitys most complex problems, but it also carries the potential for unimaginable destruction. We must strive to not replicate the mistakes of Oppenheimer, ensuring that we build robust safety mechanisms, foster transparency, and engage in a public discourse that shapes the development and use of AGI.
The future of AGI may be uncertain, but it is our responsibility to ensure that it is used for good. Its incumbent upon us to channel our intelligence and compassion to shape a future where AGI is a force for good rather than a harbinger of destruction. We must strike a careful balance between ambition and caution, ensuring that AGI becomes a tool for progress, not a weapon for mass destruction.
The endeavour of harnessing AGI is complex, but it is a task we must undertake if we wish to safeguard our future. We must learn from our past missteps and build a culture of safety, transparency, and ethics in AGI development. We must strive to create an AI that benefits humanity, not one that poses a threat to our very existence. The lessons from Oppenheimers tale must guide us towards a future where AGI is a tool for good, a tool for progress, a tool for humanity.
Views expressed above are the author's own.
END OF ARTICLE
Here is the original post:
Are we destined to reiterate Oppenheimers missteps in the development of AI? - Times of India
Restb.ai Partners with Bradford Technologies to Accelerate … – GlobeNewswire
BARCELONA, Spain, Aug. 07, 2023 (GLOBE NEWSWIRE) -- Restb.ai, a global leader in artificial intelligence (AI) and computer vision solutions for the real estate industry, today announced a strategic partnership with Bradford Technologies, a pioneer in providing innovative solutions for Valuation Professionals. As part of the partnership, Restb.ai's advanced computer vision and machine learning technology will be integrated into Bradford Technologies' report quality control processes.
This collaboration, announced today from the Valuation Expo in Las Vegas, marks a significant step in the enhancement of appraisal modernization. Integrating Restb.ai's cutting-edge AI technology with Bradford's solutions aims to automate and bolster quality control processes, including detecting image issues such as out-of-focus images and other problematic content.
Working with Bradford, we are providing the newest and most advanced AI technology to accelerate the modernization of the appraisal process, said Tony Pistilli, General Manager, Valuations for Restb.ai and one of the appraisal industrys most respected voices.
Restb.ais Visual Insights artificial intelligence technology takes a photo and converts it into detailed information. This is a service every appraiser needs to improve their inspection efficiency, said Jeff Bradford, CEO of Bradford Technologies. "We have been providing innovative services to our customers for over 35 years and are proud to partner with restb.ai to again, provide a trailblazing service to our appraiser customers, he added.
Restb.ai's quality control solution is a key component of its new Valuation Product Suite, an innovative product lineup designed explicitly for the appraisal industry. The suite encompasses advanced technology for Comparable Properties, Data Collection, Form Pre-Population/Validation, and Restb.ais proprietary Appraisal Complexity Score.
Restb.ai is unveiling its new Valuation Product Suite at the 20th Annual Valuation Expo, held August 7-9 at the Mirage Hotel in Las Vegas. Nathan Brannen, Chief Product Officer at Restb.ai and one of real estates most experienced artificial intelligence veterans, is scheduled to speak at the expo.
About Restb.aiRestb.ai, the leader in AI and computer vision for real estate, provides image recognition and data enrichment solutions for many of the industrys top brands and leading innovators. Its advanced AI-powered technology automatically analyzes property imagery to unlock visual insights at scale that empowers real estate companies with relevant and actionable property intelligence. Restb.ai is like having a real estate expert instantly research and provide a deep insight into each of the 1 million property photos uploaded daily.
About Bradford TechnologiesFor over 35 years, Bradford Technologies has been dedicated to producing innovative solutions for Valuation Professionals. Ranging from ClickFORMS, the most intuitive appraisal application available to cutting-edge Computer-Aided Appraisal Software. Bradford Technologies continues to develop trailblazing products that set the benchmark for the industry. For more information on Bradford Technologies, call 800-622-8727 or visitwww.BradfordSoftware.com.
For more information on Restb.ai, visit its website. For Restb.ai-related media inquiries, please reach out to Maya Makarem at contact@restb.ai or maya@restb.aior Kevin Hawkins at 1-206-866-1220 or kevin@wavgroup.com.
Media contacts:Kevin Hawkins 1+ (206) 866-1220kevin@wavgroup.comorMaya Makaremmaya@restb.ai
Photosaccompanying this announcementare available at https://www.globenewswire.com/NewsRoom/AttachmentNg/fd6b8759-59fe-4432-9087-066e942729c8https://www.globenewswire.com/NewsRoom/AttachmentNg/ff6609dc-c683-4b4f-93d5-8a5b1968d1bahttps://www.globenewswire.com/NewsRoom/AttachmentNg/2b2dda5b-01ea-4d8b-8de9-f915740ac097
Read this article:
Restb.ai Partners with Bradford Technologies to Accelerate ... - GlobeNewswire
Understanding Artificial Intelligence: How it Works and its Applications – Fagen wasanni
Artificial intelligence (AI) has become a prominent term in todays ever-evolving technological world. It encompasses a range of technologies, from self-driving cars to voice assistants and recommendation systems, shaping our daily lives. But what exactly is AI and how does it work?
At its core, AI is the replication of human intelligence in computers. It involves the development of algorithms and models that enable machines to perform tasks that typically require human cognitive abilities. AI can be categorized into different levels, such as Narrow AI, which focuses on specific tasks, and General AI, which possesses human-like intelligence across multiple domains.
Machine learning is a fundamental concept within AI. It entails training computers to learn from data without explicit programming. Supervised learning involves training models on labeled data, while unsupervised learning identifies patterns in unlabeled data. Reinforcement learning allows agents to learn from interactions with their environment and receive rewards or penalties.
Deep learning, a subset of machine learning, employs neural networks inspired by the human brain. These networks consist of interconnected nodes that process and transmit information. Through training and backpropagation, neural networks are capable of recognizing complex patterns and features.
Natural Language Processing (NLP) is another important aspect of AI, enabling machines to understand, interpret, and generate human language. Sentiment analysis can assess the emotional tone of text, while language generation can produce coherent and human-like text.
In the field of computer vision, AI can interpret and comprehend visual information. Image classification helps categorize images into classes, object detection identifies and locates objects within images, and facial recognition can recognize individuals based on their facial features.
AI finds applications in various industries. In healthcare, it aids in diagnosis and treatment planning. Finance benefits from AI algorithms that analyze market data for trading decisions. The automotive industry utilizes AI for self-driving cars, and the entertainment sector employs AI for content recommendation.
Data plays a crucial role in AI development. Large datasets are used to train AI models, and data preprocessing ensures the quality and relevance of the data. Ethical considerations, such as algorithmic bias, data privacy, and potential job displacement, are important factors to consider in AI development.
Looking ahead, AI research continues to advance, with potential breakthroughs in creative AI and improved human-AI collaboration. As AI becomes increasingly integrated into our lives, it is crucial to prioritize ethical considerations and responsible AI development.
See the rest here:
Understanding Artificial Intelligence: How it Works and its Applications - Fagen wasanni
AutogenAI’s Innovative AI Tool for Bids and Pitches Raises $22.3M … – Sociobits
AutogenAI, a cutting-edge startup based in London, has introduced a groundbreaking generative AI tool that has the potential to reshape the way businesses craft bids and pitches, potentially revolutionizing the entire proposal-writing landscape. In todays ever-evolving business world, where AI is playing an increasingly significant role, AutogenAI stands at the forefront of the market, offering a powerful solution to improve the likelihood of winning coveted contracts.
The importance of a well-constructed proposal in securing business contracts cannot be overstated. A successful bid or pitch can be the key factor that tips the scales in favor of a company. However, writing a winning proposal is often an arduous and time-consuming task that requires meticulous attention to detail and a deep understanding of the target market and clients needs.
Since its inception in 2022, AutogenAI has already made significant strides, amassing over 28 clients in a remarkably short period. These clients hail from various industries, including global management consultancies, BPO organizations, construction companies, facilities management businesses, and even charities and non-profit organizations seeking grant funding. Clearly, AutogenAIs appeal spans across diverse sectors, making it an indispensable tool for organizations of all shapes and sizes.
However, despite AutogenAIs rapid success, some clients remain cautious about revealing their use of AI tools for proposal writing, fearing potential concerns about the quality and authenticity of their work. Nevertheless, AutogenAIs track record and the trust it has garnered from its diverse clientele speak volumes about the tools efficiency and efficacy.
The inner workings of AutogenAIs AI-driven platform follow a three-step process:
The benefits of incorporating AutogenAI into the proposal-writing process are manifold:
In July 2023, AutogenAI achieved another significant milestone by raising $22.3 million in a Series A funding round led by Blossom Capital. This substantial investment will undoubtedly fuel AutogenAIs expansion, allowing the company to recruit top talent, enhance its platform, and widen its customer base further. With an estimated valuation reaching hundreds of millions, AutogenAI is rapidly cementing its position as a force to be reckoned with in the AI-powered proposal-writing domain.
In addition to its impressive array of features, AutogenAI has caught the attention of investors like Blossom Capital due to its potential to reduce procurement costs by approximately 10%. By automating proposal writing, businesses can cut down on expenses related to human labor and optimize overall deal margins. Additionally, AutogenAI expedites the process of creating initial draft bids by a staggering 70%, allowing bid teams to focus on refining strategies and improving the quality of their submissions.
Sean Williams, the visionary founder and CEO of AutogenAI, emphasizes that the platforms prowess stems from its amalgamation of large language models (LLMs) from notable entities like OpenAI, combined with clients structured and unstructured proprietary data. This unique combination empowers AutogenAI to create pitches based on a companys most successful past endeavors. While other AI companies may be focused on developing AGI (Artificial General Intelligence) and superintelligence, AutogenAI narrows its focus to delivering concrete, real-world results for its customers.
With companies like Blossom Capital keenly observing the impressive ROI (Return on Investment) potential of AI-driven solutions like AutogenAI, the future looks bright for the integration of AI in proposal writing and other business processes. As the adoption of AI technology continues to grow across industries, AutogenAI is poised to emerge as a leading player in the market, fundamentally transforming the way businesses approach bid and proposal writing.
AutogenAIs rapid ascent in the world of AI-powered proposal writing is a testament to the tools efficiency and the value it brings to businesses across various sectors. Its innovative approach, driven by data analysis and generative AI, has the potential to disrupt traditional proposal-writing practices and elevate the success rates of businesses in securing contracts and opportunities. As the market continues to embrace AI-driven solutions, AutogenAIs growth trajectory and dedication to its customers position it as a true trailblazer in the realm of AI technology. As we embark on a new era of intelligent automation, AutogenAI is undoubtedly leading the way to a more efficient, productive, and successful future for businesses worldwide.
Continued here:
AutogenAI's Innovative AI Tool for Bids and Pitches Raises $22.3M ... - Sociobits
Who is really responsible for ethical AI? – Mobile World Live
In January, TIME revealed Microsoft-backed OpenAI outsourced workers in Kenya in late 2021 to moderate internet data, forming a fundamental part of developing generative AI sensation ChatGPTs safety system.
According to documents seen by the news outlet, moderators earned around $2 per hour to label texts which detailed injuries, sexual abuse and self-harm. These workers were also asked to collect images, some reportedly illegal under US law, in a separate project for OpenAIs image generator DALL-E.
In a statement to TIME, OpenAI explained it took the well-being of its contractors very seriously and that programmes offering support were available through the outsourcing company, which believed its employees did not request for support through the right channels.
The work involved was so traumatic that the company dealing with OpenAI to outsource the jobs cut short its contract with the AI powerhouse, a recent Wall Street Journal article indicated. Meanwhile, a growing body of research continues to reveal the dependence of big technology companies to conduct precious work in the global south as part of a mission to make AI safe.
Surveys conducted over the years also revealed General Purpose AI deployed in biometrics, policing and housing systems have already caused gender and racial discrimination.
As ChatGPT began to fully take off, the recent dismissal of Microsofts responsible AI team raised eyebrows and questions of whether ethical concerns are actually a priority in the multibillion-dollar AI economy.
That is not to say the technology sector as a whole is not taking the risks around generative AI seriously.
Major industry figures did indeed call for a pause in the technologys developments until a robust AI act is in place. However, researchers speaking to Mobile World Live (MWL) believe the public should look a little further beyond policymaking.
Abid Adonis, researcher at Oxford Internet Institute, argues the task of ensuring ethical AI needs to be expanded.
Now, we only see two powers: regulators and big tech, but we also have civil society and scholars. And its important to hear what marginalised groups say about this because its missing from the discussion.
False AIThis view resonates with Dr Alison Powell, associate professor in Media and Communications at the London School of Economics and Political Science and director of JustAI network at the Ada Lovelace Institute.
Powell told MWL the emphasis on artificial general intelligence which industry heavyweights claimed can eclipse humans cognitive abilities and therefore dominate job markets is already in itself harmful.
Its harmful because it focuses on an imagined world rather than the actual world we live in.
This is particularly reflected in Large Language Models (LLM) built on internet data. Powell pointed out that while there are a lot of languages spoken in the actual world, English is largely dominant on the internet.
In the world, there are many ways that people experience things, express ourselves and work together. Not all of these are present online.
Powell further warned about the hype around AIs decision-making abilities and suggested the technologys powers do not take into account social responsibilities.
This somewhat makes sense when considering the fact generative AI posterchild ChatGPT falsely accused law professor Jonathan Turley of assaulting a student and made up a story about the death of Alexander Hanff, a privacy technologist who helped craft GDPR.
Other examples include data-filtering practices in GPT-3, which used a classification system to automatically discard obscene and inappropriate material.
Further flaws in LLM were highlighted in a recent report by The Washington Post, which stated tech companies had grown secretive about what they feed the AI, such as using data from websites that could be deemed discriminatory.
This backed up a study from 2021, which found generative AI has the potential to amplify privileged views, pointing to GPT-2s training data extracted from Reddit, Twitter and Wikipedia, all of which have predominantly male users.
Cultural machinePowell stressed the need to understand the social aspects where technology is more likely to cause harm before considering how to make it more ethical.
AIs are institutional machines, theyre social machines and theyre cultural machines, she argued.
If were walking away from saying, How do we do this technically, in the gears? then we produce that double bind. But if we take a step back, then we notice all of these systems are institutional systems. Thinking about making systems work along the lines of justice and inclusion is about not how the machines work, but how institutions work.
Adonis added a nuanced public discussion on ethical technology will continue to play a strong role in future innovations and policymaking.
If we build strong, fundamental discourses in many places on something we know will have detrimental effects to society, it will permeate into stakeholders and state actors. They will know what to do, and civil society will know what to do.
I believe discourse and paradigm will shape the corridors of innovation.
For Powell, AI governance means enforcing existing laws, particularly those relating to data protection, anti-discrimination and human rights that apply to the institutional settings in which you put AI.
I would continue to advocate for thinking about institutional settings employing AI, rather than thinking about it as an object of regulation itself, she added.
The editorial views expressed in this article are solely those of the author and will not necessarily reflect the views of the GSMA, its Members or Associate Members.
Originally posted here:
Who is really responsible for ethical AI? - Mobile World Live
The pleasure and pain of living with AI – The New Indian Express
By Pratik Kanjilal| Published: 05th August 2023 12:01 AMImage used for representational purpose only.
Doomsday stories about rogue artificial intelligence focus on the singularity, an inflexion point at which humans perceive an existential threat because the AI has surpassed their cognitive powers, and they fear the loss of control over themselves and their future. This usually happens in other-worldly situationsin The Terminator, the crisis is in the future, and in 2001: A Space Odyssey, its in interplanetary space. Real-world disaster scenarios have AIs taking over nuclear facilities, tripping national power grids, and hijacking telecom systems and satellites using means that brush aside human controls. But who could have imagined that Hollywood writers, actors and people working in broadcast media would be the first to experience the singularity? Its very appropriate, though, since they are the very people who have brought AI dystopias to life for us.
The US film industry is in the midst of a historic strike led by the Writers Guild of America and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA). Its effects are being seen in the biggest venuesthe cast of Oppenheimer walked out of the London premiereand in niche events like the San Diego Comic-Con, which Hollywood A-listers skipped this time. Some of the strikers are no doubt driven by Luddite futility. Movies and media have become hugely technology-dependent, and artists will be no more successful in holding off the newest wave than Canute was, even if it threatens their livelihoods. But they should be successful in negotiating better terms for how the industry appropriates their talent because right now, its like the Wild West.
Performing artists are particularly vulnerable because apart from their cultural products, their voices, their bodies, and their very selves are easily appropriated by AIs. In March, an AI researcher used a machine to sing a verse in the manner and voice of Kanye West, and some critics said that it was better than the rapper himself, which must have been pretty crushing. Industry specialists hazard that within a couple of years, every popular singer will have several AI models trained in their worka Taylor Swift studio version, stadium version, and stadium in the rain version (she does sing in the rain). Anyone could write a song and cut a track in any version of her voice. Without legal curbs on machine-made clones of human work, they would be free to exploit it commercially. Upon which, Ms Swift would have three choices: One, stop expecting revenue from streaming and the sale of albums and earn mainly from concerts like the Grateful Dead did. Two, spend the rest of her life suing people globally, which is a depressing choice. Or three, go on strike to seek terms under which she is properly compensated for third parties using her capital.
The writers and artists on strike in America have chosen option three. While singers can lose control of their voices to AIs, writers can be almost totally removed from the picture, and actors can have their entire bodies taken over. These people have rebelled because they face an existential crisis due to studios and owners of production facilities not clarifying how they are to live with AIor will they not survive at all?
While the Hollywood strike has all the eyeballs because its Hollywood, Meta has quietly upended the AI industry by open-sourcing its AI Llama-2you can install it on your laptop and take it offline. Open-sourcing is a smart step. Social media is buzzing with AI experts trying to sell you an interface or some promptsverbal instructions to get an AI to do what you wantbecause they know the gold rush wont last. Artificial general intelligence (a machine you can talk to) will become the default gateway to the internet. That would make it critical infrastructure, and governments would have to guarantee public access to it. AI will be democratised, as Meta anticipates, and will grow faster because of it.
Following the open-sourcing of Llama-2, Nick Clegg, former UK deputy prime minister and president for global affairs at Meta, has been talking up its advantages in India, which is Metas biggest market. It is one way of allaying anxieties about AI. An AI can go rogue only via an unforeseen outcomefor instance, if programmers and trainers have not anticipated an unwanted activity. For instance, it is a good policy to tell chatbots to reply to queries only as an assistant and never as a system user. It prevents machines from playing boss. But unexpected issues can also lurk in the source code of the machine. If the code is open to inspection and manipulation by thousands of programmers and researchers everywhere, the issues are much likelier to be caught and patched than by a handful of bug-chasers employed by a corporation.
The most visible problem with AI chatbots is what the industry calls hallucinationslike propagandists, machines make up facts to substantiate their thinking and even provide Borgesian references to books which do not exist. Open-sourcing is likely to contain such issues faster, leaving humans to deal with the internets real problem, which mass access to AIs is bound to increasefake news created by humans themselves. The problem is never technology, which is value-neutral. The problem is the rogues at the console, who use technology like Archimedes lever to multiply their destructive power.
Pratik Kanjilal
The India CableEditor of
(Tweets @pratik_k)
Read more:
The pleasure and pain of living with AI - The New Indian Express
AI and Machine Learning set to drive India’s $8 billion digital advertising industry, say experts | Mint – Mint
Industry experts predict that artificial intelligence and operational machine learning will have a transformative effect on communication sectors such as advertising, public relations, and content creation, reported PTI.
Industry experts predict that artificial intelligence and operational machine learning will have a transformative effect on communication sectors such as advertising, public relations, and content creation, reported PTI.
As per the report, these technologies, including generative intelligence and machine learning, are expected to significantly impact the Indian digital advertising industry, which is currently valued at approximately USD 8 billion (about 66,142 crore), as stated by Siddharth Jhawar, the General Manager of Moloco in India.
As per the report, these technologies, including generative intelligence and machine learning, are expected to significantly impact the Indian digital advertising industry, which is currently valued at approximately USD 8 billion (about 66,142 crore), as stated by Siddharth Jhawar, the General Manager of Moloco in India.
"Advertisers have long pondered the effectiveness of their ads, many relying on intuition alone. Operational machine learning, however, can revolutionise the USD 8-billion Indian digital advertising industry as it can run thousands of mini experiments to decide which ad creative appeals to which type of users," said Jhawar.
"Advertisers have long pondered the effectiveness of their ads, many relying on intuition alone. Operational machine learning, however, can revolutionise the USD 8-billion Indian digital advertising industry as it can run thousands of mini experiments to decide which ad creative appeals to which type of users," said Jhawar.
Given the diverse linguistic and cultural landscape of India, this holds particular significance, he noted.
Given the diverse linguistic and cultural landscape of India, this holds particular significance, he noted.
Notably, Google has introduced advertising campaign processes and the automatic creation of ads through the utilization of Learning Language Models (LLM) and generative AI within Google Ads.
Notably, Google has introduced advertising campaign processes and the automatic creation of ads through the utilization of Learning Language Models (LLM) and generative AI within Google Ads.
According to Dan Taylor, the Vice President of Global Ads at Google, leading companies such as Myntra, Samsung, HDFC, and Tata AIG have witnessed growth rates of up to 18 per cent through the use of Performance Max. This advertising tool incorporates Google's AI technologies for bidding, budget optimization, audience targeting, creative development, and attribution.
According to Dan Taylor, the Vice President of Global Ads at Google, leading companies such as Myntra, Samsung, HDFC, and Tata AIG have witnessed growth rates of up to 18 per cent through the use of Performance Max. This advertising tool incorporates Google's AI technologies for bidding, budget optimization, audience targeting, creative development, and attribution.
Jhawar also emphasized that companies have the opportunity to leverage deep neural networks, which fuel machine learning, to boost revenue using first-party data. Importantly, this can be achieved while upholding data privacy and delivering personalized experiences to customers.
Jhawar also emphasized that companies have the opportunity to leverage deep neural networks, which fuel machine learning, to boost revenue using first-party data. Importantly, this can be achieved while upholding data privacy and delivering personalized experiences to customers.
Wizikey, a communication software-as-a-service provider, has introduced Imara, an AI Avatar designed for public relations and communications. Imara utilizes generative AI to analyze news data and derive valuable insights. This development is in line with the growing trend of brands incorporating generative AI in their operations, which has also sparked interest and discussions within the workforce.
Wizikey, a communication software-as-a-service provider, has introduced Imara, an AI Avatar designed for public relations and communications. Imara utilizes generative AI to analyze news data and derive valuable insights. This development is in line with the growing trend of brands incorporating generative AI in their operations, which has also sparked interest and discussions within the workforce.
Archana Jain, the Founder and Managing Director of PR Pundit, an integrated communications consultancy firm, emphasized that public relations must embrace the ongoing transformation fueled by AI. She believes that the PR industry is on the brink of experiencing further disruptions in the upcoming years.
Archana Jain, the Founder and Managing Director of PR Pundit, an integrated communications consultancy firm, emphasized that public relations must embrace the ongoing transformation fueled by AI. She believes that the PR industry is on the brink of experiencing further disruptions in the upcoming years.
Jain highlighted the benefits of AI in PR, stating that professionals can now analyze vast amounts of data rapidly and efficiently. This ability enables them to make well-informed decisions and develop more effective PR strategies. Additionally, AI facilitates the creation of innovative content tailored to digitally-focused target audiences. Even in fundamental tasks like media coverage tracking, AI plays a pivotal role in enhancing overall efficiency within the industry.
Jain highlighted the benefits of AI in PR, stating that professionals can now analyze vast amounts of data rapidly and efficiently. This ability enables them to make well-informed decisions and develop more effective PR strategies. Additionally, AI facilitates the creation of innovative content tailored to digitally-focused target audiences. Even in fundamental tasks like media coverage tracking, AI plays a pivotal role in enhancing overall efficiency within the industry.
Google has reportedly been testing a product designed to generate news articles by processing ingested information. Positioned as a tool to assist journalists, the product was showcased to media organizations, including The New York Times, The Washington Post, and News Corp, which owns The Wall Street Journal, according to the report.
Google has reportedly been testing a product designed to generate news articles by processing ingested information. Positioned as a tool to assist journalists, the product was showcased to media organizations, including The New York Times, The Washington Post, and News Corp, which owns The Wall Street Journal, according to the report.
According to a McKinsey study, the implementation of AI has the most significant reported revenue impact on marketing and sales.
According to a McKinsey study, the implementation of AI has the most significant reported revenue impact on marketing and sales.
See the rest here:
IT pros mull observability tools, devx and generative AI – TechTarget
As platform engineering teams increasingly take on enterprise performance management tasks in production, there have been missed opportunities to give developers insights into their applications, experts say.
The instrumentation of application code is an area where platform engineers and SREs have stepped in on behalf of application developers, who aren't as steeped in the complexities of distributed cloud infrastructure systems such as Kubernetes. Analysts have also seen an increase in observability teams, specifically within the platform engineering discipline that connect developers' application performance insights with underlying infrastructure data.
"[There's] a move toward centralizing observability teams and centers of excellence," said Nancy Gohring, an analyst at IDC. "One driver for doing that is to try to control costs -- and one way those teams are trying to control costs is setting up data [storage] quotas for teams."
Such teams don't replace the need for developers to instrument their own application code but have helped ease the burden of managing the ongoing operational costs associated with collecting observability data, Gohring said.
There are some aspects of infrastructure monitoring, too, that developers don't need to concern themselves with, said Gregg Siegfried, an analyst at Gartner. Still, there remains a divide between the interests of platform teams in production observability and the interests of application developers, Siegfried said.
"I see an emergence of tools trying to give developers closer access to that data to give them more insight -- maybe allow them to put better instrumentation into the software," he said. "But none of them have really set the world on fire yet."
It's a commonly understood best practice in observability that developers instrument their own code before it's deployed to production, the better to manage its performance in the "you build it, you run it" mode of DevOps.
"I'm part of the OpenTelemetry End User Working Group. And recently we had somebody come in and talk to our user community about how they work in a company that really fosters an observability culture," said Adriana Villela, developer advocate at observability vendor LightStep, in a presentation at the recent Monitorama Conference. "The wonderful thing about it is that there is a directive from the executive saying, 'Thou shalt do observability and also developers are the ones instrumenting their own code,' which means that if you've got some disgruntled development team saying, 'I don't have time to instrument my code,' tough [s---]."
But some newer entrants to the market and their early customers question whether the devx, or developer experience, with observability needs to be quite so tough.
"Developers being able to add custom metrics to their code or spans or use observability tools is really critical to help developers take ownership of what they run in production," said Joseph Ruscio, a general partner at Heavybit, an early-stage investor in cloud infrastructure startups, in a Monitorama presentation.
However, to a new engineer, the overwhelming number of tools available for observability is "inscrutable and not at all welcoming to someone new to the craft," Ruscio said.
A production engineering team at a market research company is trying to make this task less onerous for developers using Groundcover's new Kubernetes-based APM tool. Groundcover uses eBPF to automatically gather data from Kubernetes clusters and associate it with specific applications, which could eventually replace the language-specific SDKs developers used to instrument applications using incumbent vendor Datadog.
"For what we are calling custom metrics that monitor a specific application's behavior, these will continue to be the responsibility of the developers," said Eli Yaacov, a production engineer at SimilarWeb, based in New York. "But we, the production engineers, can provide the developers the [rest of] the ecosystem. For example, if they are running Kubernetes, they don't need to worry about [instrumenting for] the default CPU or memory. Groundcover collects all this data in Kubernetes without requiring the developers to integrate with anything into their services."
Other emerging vendors also offer automated instrumentation features in debugging tools to instrument developers' apps without requiring code changes. These include Lightrun and Rookout.
Amid this year's general hype about generative AI, observability vendors have been quick to roll out natural language interfaces for their tools, mostly to add a user-friendly veneer over their relatively complex, often proprietary, data query languages. Such vendors include Honeycomb, Splunk, and most recently, Dynatrace and Datadog.
I see an emergence of tools trying to give developers closer access to that to that data to give them more insight -- maybe allow them to put better instrumentation into the software. Gregg SiegfriedAnalyst, Gartner
However, generative AI interfaces are not necessarily an obvious option to improve the developer experience of using observability tools, Siegfried said, as most developers are comfortable working in code.
"They have better things to do with their time than learn how to use an [application performance management] solution," he said.
Long term, generative AI and artificial general intelligence may have a significant effect, Ruscio said. But in the short term, Siegfried said he is skeptical that large language models such as ChatGPT will make a major impact on observability, particularly the developer experience.
Instead, unlike security and production-level systems monitoring, observability has yet to shift very far left in the development lifecycle, and developers would be best served by changing that, Ruscio said during his presentation. New and emerging vendors -- some of which are among Heavybit's portfolio companies -- are working in this area, termed observability-driven development.
"There's this missing mode where, wouldn't it be nice if you had some input when you are actually writing code as to what does this code look like in production?" Ruscio said. "It's cool that when I ship it, I'll get a graph. But why shouldn't I just know now, in my IDE, [how it will perform?]"
Beth Pariseau, senior news writer at TechTarget, is an award-winning veteran of IT journalism. She can be reached at [emailprotected] or on Twitter @PariseauTT.
Originally posted here:
IT pros mull observability tools, devx and generative AI - TechTarget
Convergence of Brain-Inspired AI and AGI: Exploring the Path to Intelligent Synergy – EIN News
FAYETTEVILLE, GA, USA, August 3, 2023 /EINPresswire.com/ -- With over 86 billion neurons, each having the ability to form up to 10,000 synapses with other neurons, the human brain gives rise to an exceptionally complex network of connections that underlie the proliferation of intelligence.
There has been a long-standing pursuit of humanity centered around artificial general intelligence (AGI) systems capable of achieving human-level intelligence or even surpassing itenabling AGI to undertake a wide range of intellectual tasks, including reasoning, problem-solving and creativity.
Brain-inspired artificial intelligence is a field that has emerged from this endeavor, integrating knowledge from neuroscience, psychology, and computer science to create AI systems that are not only more efficient but also more powerful. In a new study published in the KeAi journal Meta-Radiology, a team of researchers examined the core elements shared between human intelligence and AGI, with particular emphasis on scale, multimodality, alignment, and reasoning.
Notably, recent advancements in large language models (LLMs) have showcased impressive few-shot and zero-shot capabilities, mimicking human-like rapid learning by capitalizing on existing knowledge, shared Lin Zhao, co-first author of the study. In particular, in-context learning and prompt tuning play pivotal roles in presenting LLMs with exemplars to adeptly tackle novel challenges.
Moreover, the study delved into the evolutionary trajectory of AGI systems, examining both algorithmic and infrastructural perspectives. Through a comprehensive analysis of the limitations and future prospects of AGI, the researchers gained invaluable insights into the potential advancements that lie ahead within the field.
Our study highlights the significance of investigating the human brain and creating AI systems that emulate its structure and functioning, bringing us closer to the ambitious objective of developing AGI that rivals human intelligence, said corresponding author Tianming Liu. AGI, in turn, has the potential to enhance human intelligence and deepen our understanding of cognition. As we progress in both realms of human intelligence and AGI, they synergize to unlock new possibilities.
###
References
Journal Meta-Radiology
DOI 10.1016/j.metrad.2023.100005
Original URL https://doi.org/10.1016/j.metrad.2023.100005
Wendy ChenTranSpread+1 865-405-5638email us here
View original post here:
Convergence of Brain-Inspired AI and AGI: Exploring the Path to Intelligent Synergy - EIN News