
Berkeley Talks: Jitendra Malik on the sensorimotor road to artificial … – UC Berkeley

Read the transcript.

Follow Berkeley Talks, a Berkeley News podcast that features lectures and conversations at UC Berkeley.

Jitendra Malik, a professor of electrical engineering and computer sciences at UC Berkeley, gave a lecture on March 20 called "The sensorimotor road to artificial intelligence." (Screenshot from video by Berkeley Audio Visual)

In Berkeley Talks episode 164, Jitendra Malik, a professor of electrical engineering and computer sciences at UC Berkeley, gives the 2023 Martin Meyerson Berkeley Faculty Research Lecture, called "The sensorimotor road to artificial intelligence."

"It's my pleasure to talk on this very, very hot topic today," Malik begins. "But I'm going to talk about natural intelligence first because we can't talk about artificial intelligence without knowing something about the natural variety."

"We could talk about intelligence as having started about 550 million years ago in the Cambrian era, when we had our first multicellular animals that could move about," he continues. "So, these were the first animals that could move, and that gave them an advantage because they could find food in different places. But if you want to move and find food in different places, you need to perceive, you need to know where to go to, which means that you need to have some kind of a vision system or a perception system. And that's why we have this slogan, which is from Gibson: We see in order to move and we move in order to see."

For a robot to have the ability to navigate specific terrain, like stepping stones or stairs, Malik says, it needs some kind of vision system.

"But how do we train the vision system?" he asks. "We wanted it to learn in the wild. So, here was our intuition: If you think of a robot on stairs, its proprioception, its senses, its joint angles can let it compute the depth of its left leg and right leg and so on. It has that geometry from its joint angles, from its internal state. So, can we use it for training? The idea was the proprioception predicts the depth of every leg and the vision system gets an image. What we asked the vision system to do is to predict what the depth will be 1.5 seconds later."

"That was the idea: that you just shift what signal it will know 1.5 seconds later and use that to do this advance prediction. So, we have this robot, which is learning day by day. In the first day, it's clumsy. The second day, it goes up further. And then, finally, on the third day, you will see that it makes it all the way."
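
The recipe Malik describes is a form of self-supervised learning: the robot's own proprioception supplies the training labels, and the vision network learns to predict those labels ahead of time. Below is a minimal PyTorch sketch of that objective; the tiny network, the assumed 10 Hz control rate and all names are illustrative assumptions, not the actual system.

```python
# Self-supervised depth prediction: the vision network sees an image at time t
# and is trained to predict the per-leg terrain depth that proprioception
# (joint angles) will measure 1.5 seconds later. Purely illustrative.

import torch
import torch.nn as nn

LOOKAHEAD_STEPS = 15   # 1.5 s at an assumed 10 Hz control rate
NUM_LEGS = 4

class DepthFromImage(nn.Module):
    """Tiny CNN mapping a camera image to one depth estimate per leg."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, NUM_LEGS)

    def forward(self, image):
        return self.head(self.features(image))

model = DepthFromImage()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, proprio_depths):
    """images: (B, 3, H, W) captured at time t.
    proprio_depths: (B, T, NUM_LEGS) depths computed from joint angles."""
    target = proprio_depths[:, LOOKAHEAD_STEPS]   # depth 1.5 s in the future
    loss = nn.functional.mse_loss(model(images), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

No human labeling is needed in this setup: every step the robot takes generates its own supervision, which is why the robot can keep improving "in the wild" day after day.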

Malik's lecture, which took place on March 20, was the first in a series of public lectures at Berkeley this spring by the world's leading experts on artificial intelligence. Other speakers in the series will include Berkeley Ph.D. recipient John Schulman, a co-founder of OpenAI and the primary architect of ChatGPT; a professor emeritus at MIT and a leading expert in robotics; and four other leading Berkeley AI faculty members, who will discuss recent advances in the fields of computer vision, machine learning and robotics.

Watch a video of Malik's lecture below.

Listen to other episodes of Berkeley Talks:


Artificial intelligence could reduce barriers to TB care – University of Georgia

A new study led by faculty at the University of Georgia demonstrates the potential of using artificial intelligence to transform tuberculosis treatment in low-resource communities. And while the study focused on TB patients, it has applications across the health care sector, freeing up health care workers to perform other necessary tasks.

Growing evidence has demonstrated the potential for AI to increase productivity, reduce health care worker burnout, and improve quality of care in clinical settings. The study, which was published last month in the Journal of Medical Internet Research AI, pilots the use of AI to watch thousands of submitted videos of TB patients taking their medication.

This application could automate the job of a health care worker watching a patient take their pill at a clinic, known as directly observed therapy (DOT). DOT is acknowledged as the best way to monitor and ensure TB treatment adherence, but this approach places a large time burden on patients and health care workers.

"Health care is an ever-growing industry needing a lot of hands. So, if we can put our hands where they must be and free them up to not do things that could be done in another way, I think we can be more efficient and deliver better quality care," said lead author Juliet Sekandi, who specializes in mobile health research at the Global Health Institute at UGA's College of Public Health.

Mobile health technologies have been shown to support clinicians in the battle to control TB in Uganda, which sees around 45,000 new cases per year. Sekandi and colleagues in Uganda launched a successful project in 2018, dubbed DOT Selfie, which harnessed the popularity of selfies to encourage TB patients to submit videos of themselves taking their daily meds.

"The patients are willing. It's very acceptable to them because of the convenience and the autonomy it lends to them," she said.

Since its launch, DOT Selfie has generated thousands of videos, but who is going to watch all those videos to confirm swallowing of TB medication?

"A nurse or provider has to sit behind a computer and open those videos and confirm that somebody is taking their meds, right? Watching people putting pills in their mouth, it can be boring and monotonous," said Sekandi.

And when a clinic is short-staffed, watching submitted videos quickly falls to the bottom of the to-do list, despite how important the monitoring piece is to TB control.

"Reading about what AI can do, then I realized, oh, now we can fill that part with an automation process," said Sekandi.

She began working with colleagues from UGA's School of Computing to develop deep learning models that could recognize when patients were taking their medications, using nearly 500 videos from her DOT Selfie project.

They tested four models and found that the top-performing model accurately reviewed videos and identified patients taking their pills 85% of the time, which is comparable to a human doing the same task but far faster, at about half a second per video. The least successful model still performed well, with around 78% accuracy.
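
As a rough illustration of what such a model does, here is a minimal PyTorch sketch of a video classifier that pools per-frame CNN features and flags low-confidence videos for human review. The architecture, input sizes and 0.9 threshold are assumptions made for this sketch; the paper's four models are not specified here.

```python
# Toy video-classification setup: score each submitted clip for
# "medication taken" vs. "not confirmed". Illustrative only, not the
# authors' published pipeline.

import torch
import torch.nn as nn

class AdherenceClassifier(nn.Module):
    """Scores a short clip by averaging per-frame CNN features."""
    def __init__(self):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, 2)   # [not confirmed, swallowed]

    def forward(self, clip):                 # clip: (B, T, 3, H, W)
        b, t, c, h, w = clip.shape
        feats = self.frame_encoder(clip.reshape(b * t, c, h, w))
        feats = feats.reshape(b, t, -1).mean(dim=1)   # average over frames
        return self.classifier(feats)

model = AdherenceClassifier()
clip = torch.randn(1, 16, 3, 112, 112)       # one 16-frame video clip
probs = model(clip).softmax(dim=-1)
# Clips the model is unsure about (e.g. max probability < 0.9) would be
# routed to a nurse for manual review, as Sekandi describes below.
needs_review = probs.max(dim=-1).values < 0.9
```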

"So, AI is really an accelerator of that process, because then a nurse will not be worried that they have to watch all the 10,000 videos, but maybe watch only a few that need verification, say 100 out of 10,000," said Sekandi.

"This innovation has the potential to boost TB medication adherence, which benefits the patient, curbs TB spread and safeguards effective TB treatment," she said.

"It shows the potential of advancing intelligent and personalized health care by exploiting visual information," said co-author Sheng Li, an AI researcher at the University of Virginia's School of Data Science, who collaborated with Sekandi on the project while on the faculty at UGA.

"I'm excited that there's yet another tool to add to our toolkit to be able to plug gaps in the delivery of health care," said Sekandi.

"And one of them is really the shortage of human resources. I'm not saying that every single shortage will be addressed by AI, but the task at hand is for us to identify those mundane tasks that can actually be handed off."

The paper, "Application of Artificial Intelligence to the Monitoring of Medication Adherence for Tuberculosis Treatment in Africa: Algorithm Development and Validation," is available online.


WGA Would Allow Artificial Intelligence in Scriptwriting, as Long as Writers Maintain Credit – Variety

UPDATED with WGA response.

The Writers Guild of America has proposed allowing artificial intelligence to write scripts, as long as it does not affect writers credits or residuals.

The guild had previously indicated that it would propose regulating the use of AI in the writing process, which has recently surfaced as a concern for writers who fear losing out on jobs.

But contrary to some expectations, the guild is not proposing an outright ban on the use of AI technology.

Instead, the proposal would allow a writer to use ChatGPT to help write a script without having to share writing credit or divide residuals. Or, a studio executive could hand the writer an AI-generated script to rewrite or polish and the writer would still be considered the first writer on the project.

In effect, the proposal would treat AI as a tool like Final Draft or a pencil rather than as a writer. It appears to be intended to allow writers to benefit from the technology without getting dragged into credit arbitrations with software manufacturers.

The proposal does not address the scenario in which an AI program writes a script entirely on its own, without help from a person.

The guild's proposal was discussed in the first bargaining session on Monday with the Alliance of Motion Picture and Television Producers. Three sources confirmed the proposal.

It's not yet clear whether the AMPTP, which represents the studios, will be receptive to the idea.

The WGA proposal states simply that AI-generated material will not be considered "literary material" or "source material."

Those terms are key for assigning writing credits, which in turn have a big impact on residual compensation.

"Literary material" is a fundamental term in the WGA's minimum basic agreement: it is what a writer produces (including stories, treatments, screenplays, dialogue, sketches, etc.). If an AI program cannot produce "literary material," then it cannot be considered a writer on a project.

"Source material" refers to things like novels, plays and magazine articles, on which a screenplay may be based. If a screenplay is based on source material, then it is not considered an original screenplay. The writer may also get only a "screenplay by" credit, rather than a "written by" credit.

A "written by" credit entitles the writer to the full residual for the project, while a "screenplay by" credit gets 75%.

By declaring that ChatGPT cannot write source material, the guild would be saying that a writer could adapt an AI-written short story and still get full "written by" credit.

Such scenarios may seem farfetched. But technological advances can present some of the thorniest issues in bargaining, as neither side wants to concede some advantage that may become more consequential in future years.

AI could also be used to help write questions on Jeopardy! or other quiz and audience participation shows.

SAG-AFTRA has also raised concerns about the effects of AI on performers, notably around losing control of their image, voice and likeness.

The WGA is set to continue bargaining for the next two weeks before reporting back to members on next steps and a potential strike. The contract expires on May 1.

The WGA did not respond to requests for comment. On Wednesday, the guild issued a series of tweets on its AI proposal:

The first tweet sums up the intent of the proposal, which is to regulate AI in such a way to preserve writers working standards. The subsequent tweets, however, differ from the language of the proposal.

The entirety of the WGA proposal reads: "ARTIFICIAL INTELLIGENCE AND SIMILAR TECHNOLOGIES: Provide that written material produced by artificial intelligence programs and similar technologies will not be considered source material or literary material on any MBA-covered project."

The guild's tweets say something else, referring to how AI material is used rather than how it is considered. The tweets say that AI material cannot be used as source material and that AI cannot generate covered literary material. The proposal states only that AI material, if used, will not be considered as literary or source material.

Those definitions are key to determining credit and residual compensation in the guild contract. By excluding AI material from those definitions, the guild proposal would protect writers from losing a share of credit or residuals due to the use of AI software.


State-of-the-Art Artificial Intelligence Sheds New Light on the … – SciTechDaily

By Kavli Institute for the Physics and Mathematics of the Universe, March 24, 2023

Figure 1. A schematic illustration of the first stars' supernovae and observed spectra of extremely metal-poor stars. Ejecta from the supernovae enrich pristine hydrogen and helium gas with heavy elements in the universe (cyan, green, and purple objects surrounded by clouds of ejected material). If the first stars are born as a multiple stellar system rather than as an isolated single star, elements ejected by the supernovae are mixed together and incorporated into the next generation of stars. The characteristic chemical abundances of such a mechanism are preserved in the atmospheres of the long-lived low-mass stars observed in our Milky Way Galaxy. The team invented a machine learning algorithm to distinguish whether the observed stars were formed out of the ejecta of a single (small red stars) or multiple (small blue stars) previous supernovae, based on elemental abundances measured from the spectra of the stars. Credit: Kavli IPMU

By using machine learning and state-of-the-art supernova nucleosynthesis, a team of researchers has found that the majority of observed second-generation stars in the universe were enriched by multiple supernovae, reports a new study in The Astrophysical Journal.

Nuclear astrophysics research has shown that elements carbon and heavier in the Universe are produced in stars. But the first stars, born soon after the Big Bang, did not contain such heavy elements, which astronomers call metals. The next generation of stars contained only a small amount of heavy elements produced by the first stars. Understanding the universe in its infancy requires researchers to study these metal-poor stars.

Luckily, these second-generation metal-poor stars are observed in our Milky Way Galaxy, and have been studied by a team of Affiliate Members of the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) to close in on the physical properties of the first stars in the universe.

Figure 2. Carbon vs. iron abundance of extremely metal-poor (EMP) stars. The color bar shows the probability for mono-enrichment from our machine learning algorithm. Stars above the dashed lines (at [C/Fe] = 0.7) are called carbon-enhanced metal-poor (CEMP) stars and most of them are mono-enriched. Credit: Hartwig et al.
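
As a toy illustration of the task in Figure 2 (not the team's published algorithm, whose code is linked below), a classifier can be trained on abundance ratios drawn from simulated supernova yields and then report a mono-enrichment probability for an observed star. The feature set, the tiny training sample and the choice of a random forest here are all invented for illustration.

```python
# Toy classifier: given measured elemental abundances of a metal-poor star,
# estimate the probability it was enriched by a single supernova
# ("mono-enriched") rather than several. Illustrative sketch only.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training set: abundance ratios for stars simulated from
# supernova nucleosynthesis models, labelled mono- (1) or multi-enriched (0).
# Assumed feature columns: [Fe/H], [C/Fe], [Mg/Fe], [Ca/Fe].
X_train = np.array([
    [-4.0,  1.2, 0.5, 0.3],   # mono-enriched example
    [-3.5,  0.1, 0.4, 0.2],   # multi-enriched example
    [-4.2,  1.5, 0.6, 0.4],
    [-3.0, -0.1, 0.3, 0.1],
])
y_train = np.array([1, 0, 1, 0])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Abundances measured from the spectrum of an observed EMP star:
star = np.array([[-3.8, 0.9, 0.5, 0.2]])
p_mono = clf.predict_proba(star)[0, 1]   # probability of mono-enrichment
print(f"P(mono-enriched) = {p_mono:.2f}")
```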

The team's results give the first quantitative constraint, based on observations, on the multiplicity of the first stars.

Figure 3. (from left) Visiting Senior Scientist Kenichi Nomoto, Visiting Associate Scientist Miho Ishigaki, Kavli IPMU Visiting Associate Scientist Tilman Hartwig, Visiting Senior Scientist Chiaki Kobayashi, and Visiting Senior Scientist Nozomu Tominaga. Credit: Kavli IPMU, Nozomu Tominaga

"Multiplicity of the first stars was only predicted from numerical simulations so far, and there was no way to observationally examine the theoretical prediction until now," said lead author Hartwig. "Our result suggests that most first stars formed in small clusters, so that multiple of their supernovae can contribute to the metal enrichment of the early interstellar medium," he said.

"Our new algorithm provides an excellent tool to interpret the big data we will have in the next decade from ongoing and future astronomical surveys across the world," said Kobayashi, also a Leverhulme Research Fellow.

"At the moment, the available data of old stars are the tip of the iceberg within the solar neighborhood. The Prime Focus Spectrograph, a cutting-edge multi-object spectrograph on the Subaru Telescope developed by the international collaboration led by Kavli IPMU, is the best instrument to discover ancient stars in the outer regions of the Milky Way, far beyond the solar neighborhood," said Ishigaki.

The new algorithm invented in this study opens the door to making the most of diverse chemical fingerprints in metal-poor stars discovered by the Prime Focus Spectrograph.

"The theory of the first stars tells us that the first stars should be more massive than the Sun. The natural expectation was that the first star was born in a gas cloud containing a mass a million times more than the Sun. However, our new finding strongly suggests that the first stars were not born alone, but instead formed as a part of a star cluster or a binary or multiple star system. This also means that we can expect gravitational waves from the first binary stars soon after the Big Bang, which could be detected in future missions in space or on the Moon," said Kobayashi.

Hartwig has made the code developed in this study publicly available at https://gitlab.com/thartwig/emu-c.

Reference: "Machine Learning Detects Multiplicity of the First Stars in Stellar Archaeology Data" by Tilman Hartwig, Miho N. Ishigaki, Chiaki Kobayashi, Nozomu Tominaga and Kenichi Nomoto, 22 March 2023, The Astrophysical Journal. DOI: 10.3847/1538-4357/acbcc6


Artificial intelligence is helping researchers identify and analyse … – Art Newspaper

Andrea Jalandoni knows all too well the challenges of archaeological work. As a senior research fellow at the Center for Social and Cultural Research at Griffith University in Queensland, Australia, Jalandoni has dodged crocodiles, scaled limestone cliffs and sailed traditional canoes in shark-infested waters, all to study significant sites in the Pacific, Southeast Asia and Australia. One of her biggest challenges is a modern one: analysing the exponential amounts of raw data, such as photos and tracings, collected at the sites.

"Manual identification takes too much time, money and specialist knowledge," Jalandoni says. She set her trowel down years ago in favour of more advanced technologies. Her toolkit now includes multiple drones and advanced imaging techniques to record sites and discover things not apparent to the naked eye. But to make sense of all the data, she needed to make use of one more cutting-edge tool: artificial intelligence (AI).

Jalandoni teamed up with Nayyar Zaidi, senior lecturer in computer science at Deakin University in Victoria, Australia. Together they tested machine learning, a subset of AI, to automate image detection to aid in rock art research. Jalandoni used a dataset of photos from the Kakadu National Park in Australias Northern Territory and worked closely with the regions First Nations elders. Some findings from this research were published last August by the Journal of Archaeological Science.

Kakadu National Park, a Unesco world heritage site, contains some of the most well-known examples of painted rock art. The works are created from pigments made of iron-stained clays and iron-rich ores that were mixed with water and applied using tools made of human hair, reeds, feathers and chewed sticks. Some of the paintings in this region date back 20,000 years, making them among the oldest art in recorded history. Despite its world-renowned status for rock art, only a fraction of the works in the park have been studied.

"For First Nations people, rock art is an essential aspect of contemporary Indigenous cultures that connects them directly to ancestors and ancestral beings, cultural stories and landscapes," Jalandoni says. "Rock art is not just data, it is part of Indigenous heritage and contributes to Indigenous wellbeing."

An example of artificial intelligence extracting a figure from a rock art photo. Courtesy Andrea Jalandoni

For the AI study, the researchers tested a machine learning model to detect rock art in hundreds of photos, some of which showed painted rock art and others only bare rock surfaces. The system found the art with a high degree of accuracy (89%), suggesting it may be invaluable for assessing large collections of images from heritage sites around the world.
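
For readers curious about how such a detector is typically built, here is a minimal sketch of a binary rock-art image classifier. Fine-tuning a small pretrained backbone is a common choice for datasets of only a few hundred photos; the study's actual model and training details may differ, and the filename in the usage note is hypothetical.

```python
# Binary "rock art present / absent" classifier built by fine-tuning an
# ImageNet-pretrained ResNet-18. Illustrative sketch only.

import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Replace the backbone's final layer with a two-class head:
# index 0 = no rock art, index 1 = rock art.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def score_photo(path: str) -> float:
    """Return the model's probability that a site photo contains rock art."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        logits = model(image)
    return logits.softmax(dim=-1)[0, 1].item()

# e.g. score_photo("kakadu_panel_042.jpg")  # hypothetical filename
```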

"Image detection is just the beginning. The potential to automate many steps in rock art research, coupled with more sophisticated analysis, will speed up the pace of discovery," Jalandoni says. Trained systems are expected to be able to classify images, extract motifs and find relationships among the different elements. All this will lead to deeper knowledge and understanding of the images, stories and traditions of the past.

Eventually, AI systems may be able to be trained on more complex tasks such as identifying the works of individual artists or virtually restoring lost or degraded works.

This is important because time is of the essence for many ancient forms of art and storytelling. "In areas where numerous rock art sites exist, much of it is often unidentified, unrecorded and unresearched," Jalandoni says. "And with climate change, extreme weather events, natural disasters, encroaching development and human mismanagement, this inherently finite form of art and culture will continue to become more vulnerable and more rare."

Jannie Loubser, a rock art specialist and a cultural resource management archaeologist from conservation group Stratum Unlimited, sees another important use for AI in conservation and preservation. Trained systems will help monitor imperceptible changes to surfaces or conditions at rock art sites. But, he adds, "ground truthing" (standing face-to-face with the work) will always be important for understanding a site.

Jalandoni concurs that there is nothing like the in-person study of works created by artists thousands or tens of thousands of years ago and trying to understand and acknowledge the story being told. But she sees great potential in combining her new and old tools to explore and document difficult-to-reach sites.

Martin Puchner, author of Culture: The Story of Us, From Cave Art to K-Pop (2023), sees a poetic resonance in the use of AI, the most contemporary of tools, to reveal the past.

"Even as we are moving into the future we are also discovering more about the past, sometimes through accidents when someone discovers the cave, but also, of course, through new technologies," Puchner says.


Explained | Artificial Intelligence and screening of breast cancer – WION

Artificial Intelligence (AI) has been in the news in recent months with many questioning whether it will replace humans in the workforce in the future. Many people globally have started using AI for tasks such as writing emails, article summaries, cover letters, etc. AI is also being used in the field of medicine to search medical data and uncover insights to help improve health outcomes and patient experiences.

Cancer, a disease in which some of the body's cells grow uncontrollably and spread to other parts of the body, continues to plague countries. And among all types of cancer, breast cancer is the most common type of cancer occurring in women globally. Several factors, including genetics, lifestyle, and the environment, have contributed to the rise in the prevalence of breast cancer among women.

Proper screening for early diagnosis and treatment is an essential factor when combating the disease.

According to a report published in the PubMed Central (PMC) journal in October last year, faster and more accurate results are some of the benefits of AI methods in breast cancer screening.

Breast cancer is more effectively treated if diagnosed early, while the effectiveness of treatment in the later stages is poor. The report in the PMC, titled "Artificial Intelligence in Breast Cancer Screening and Diagnosis," says that the incorporation of AI into screening methods is a relatively new and emerging field that shows a lot of promise in the early detection of breast cancer, thus resulting in a better prognosis of the condition.

"Human intelligence has always triumphed over every other form of intelligence on this planet. The defining feature of human intelligence is the ability to use previous knowledge, adapt to new conditions, and identify meaning in patterns. The success of AI lies in the capacity to reproduce the same abilities," it adds.

Incorporating AI into the screening methods such as the examination of biopsy slides enhances the treatment success rate. Machine learning and deep learning are some of the important aspects of AI which are required in breast cancer imaging.

Machine learning is used to store a large dataset, which is later used to train prediction models and interpret generalisations. On the other hand, deep learning, the newest branch of machine learning, works by establishing a system of artificial neural networks that can classify and recognise images, as per the report.
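
To make the deep learning side concrete, here is a minimal PyTorch sketch of a small convolutional neural network that classifies a mammogram image patch. The architecture, classes and input size are illustrative assumptions for this sketch, not a clinically validated model.

```python
# Toy CNN that classifies a grayscale mammogram patch as benign or
# suspicious. Illustrative only.

import torch
import torch.nn as nn

class MammogramCNN(nn.Module):
    """Classifies a grayscale mammogram patch: [benign, suspicious]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),
        )

    def forward(self, x):                 # x: (B, 1, H, W)
        return self.net(x)

model = MammogramCNN()
patch = torch.randn(1, 1, 256, 256)       # one grayscale patch
probs = model(patch).softmax(dim=-1)
# A screening workflow would flag patches with a high "suspicious"
# probability for a radiologist's review rather than issue a diagnosis.
```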

Regarding breast cancer treatment, AI is used for early detection, drawing on data obtained from radiomics and biopsy slides. This is backed by a global effort to build learning algorithms for reading mammograms while reducing the number of false positives.

"AI has increased the odds of identifying metastatic breast cancer in whole slide images of lymph node biopsy. Because people's risk factors and predispositions differ, AI algorithms operate differently in different populations," the report further says.

AI seems to be a very helpful tool when it comes to treating cancer. It has shown impressive outcomes, and there is a possibility that it could change every method of treatment used presently. However, there are some challenges.

The report, published in the PMC journal in October last year, says that a concerning question is where one can draw the line between AI and human intelligence. "AI is based on data collected from populations. Therefore, a disparity is sure to rise when it comes to the development of data from people belonging to different socio-economic conditions," it adds, and points out that cancer is one particular disease that has indices that vary across different races.

Studies relating to the efficiency of AI have certain set outcomes that can be used to assess their standards and credibility. And for AI machines to be accepted, people must be able to independently replicate and produce the machine like any other scientific finding. This implies a common code must be available to all, and it is only possible if data is shared with everyone equally.

AI models used for managing cancer are centred on image data, and the report says the problem with this aspect is the underutilisation of patient histories saved as electronic health records in hospitals.

"Easy-to-access databases and user-friendly software must be incorporated into the software systems of hospitals worldwide, which is a difficult task at the moment."

One of the biggest challenges is building trust among doctors to make their decisions with the help of AI, and adequate training must be provided to doctors on how to use this technology.

Another challenge is that there are a lot of ethical risks to consider while using AI methods, including data confidentiality, privacy violation, the autonomy of patients, and consent. But the report said that many measures are being taken to prevent violations of confidentiality, along with legislation to keep a check on malpractice.


Task Force To Convene Conversations on Artificial Intelligence’s … – UVA Today

Other members of the task force include Gabrielle Bray, a fourth-year student who chairs the Honor Committee; T. Kenny Fountain, an associate professor of English and director of Writing Across the Curriculum in the College and Graduate School of Arts & Sciences; Briana Morrison, an associate professor of computer science in the School of Engineering and Applied Science; Reza Mousavi, an assistant professor of commerce in the McIntire School of Commerce; and Michael Palmer, director of the Center for Teaching Excellence.

The task force will hold a series of virtual town hall meetings, organized by school. Faculty and students may attend any session. Participants may register for each session on the task force website. Each session is limited to 300 participants. If there is enough demand, more sessions may be added.

The scheduled sessions are: Arts & Sciences (natural sciences), March 27, 3 to 4:30 p.m.; Architecture, Batten, Education and Human Development, and Nursing, March 29, noon to 1:30 p.m.; Engineering and Applied Sciences and Data Science, April 10, 3 to 4:30 p.m.; Arts & Sciences (social sciences), April 11, 12:30 to 2 p.m.; Arts & Sciences (arts and humanities) and Professional and Continuing Education, April 12, noon to 1:30 p.m.; and Darden, Law and McIntire, April 14, 2 to 3:30 p.m.

The link to its questionnaire is available on the task force website.

"Both faculty and students are invited to these online town halls," Heny said. "We will provide some information and then ask them critical questions that get them to engage, and they will record their responses in a form we will use as a source of data."

"We want to learn how students and faculty are actually using this technology in courses," Pennock said. "We hear anecdotal evidence from the faculty members who are closest to us, but we really want to understand how our students are using it to study as well as to complete assignments. There's a real opportunity for faculty to make their classes better, to be able to get more work done in the same amount of time."


This project at University of Chicago aims at thwarting artificial intelligence from mimicking artistic styles details – The Financial Express

Anyone who has held paper and a paintbrush knows the effort that goes into making a piece of art. That effort went for a toss last year, when our timelines across social media platforms got inundated with AI-generated artworks, stunning yet scary to fathom. Machines replacing human labour is something we have often heard of; that it could happen to artists was somewhat inconceivable. And that an AI tool can generate artwork from mere prompts can leave any artist uneasy.

While artificial intelligence (AI) is doing its thing, an academic research group of PhD students and professors at the University of Chicago, USA, has launched a tool to thwart it. Glaze is their academic research project aimed at thwarting AI from mimicking the style of artists. "What if you could add a cloak layer to your digital artwork that makes it harder for AI to mimic? Say hello to Glaze," it says on its website.

"Glaze is a tool to help artists to prevent their artistic styles from being learned and mimicked by new AI-art models such as MidJourney, Stable Diffusion and their variants. It is a collaboration between the University of Chicago SAND Lab and members of the professional artist community, most notably Karla Ortiz. Glaze has been evaluated via a user study involving over 1,100 professional artists," Glaze's website reads.

Glaze Beta2 has been made available for download starting March 18.

It is a normal exercise for several artists to post their work online to build a portfolio and even earn from it. However, generative AI tools have been equipped to create artworks in the same style after just seeing a few of the original ones.

This is what Glaze aims to thwart by creating a cloaked version of the original image.

"Glaze generates a cloaked version for each image you want to protect. During this process, none of your artwork will ever leave your own computer. Then, instead of posting the original artwork online, you could post the cloaked artwork to protect your style from AI art generators," it says.

The way it works is, when an artist wants to post her work online but does not want AI to mimic it, she can run the work, in digital form, through Glaze. The tool then makes a few changes, which are hardly visible to the human eye. "We refer to these added changes as a style cloak and changed artwork as cloaked artwork," it says. While the cloaked artwork appears identical to the original to humans, the machine picks up the altered version. Hence, whenever it gets a prompt, say "Mughal women in south Delhi in MF Husain style," the artwork generated by AI will be very different from the said artist's style. This protects the artistic style from being mimicked without the artist's consent.
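
As a rough illustration of the underlying idea (Glaze's actual method is more sophisticated, and its details are in the team's paper under peer review), a "style cloak" can be thought of as a small, bounded adversarial perturbation that shifts an image's features toward a decoy style while leaving its pixels nearly unchanged. The sketch below uses a stand-in encoder and invented parameters.

```python
# Toy "style cloak": nudge the image by a small, bounded perturbation so
# that a feature extractor's embedding matches a decoy, while per-pixel
# change stays imperceptible. Illustrative of the technique only; not
# Glaze's actual algorithm.

import torch
import torch.nn as nn

feature_extractor = nn.Sequential(        # stand-in for an AI model's encoder
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def cloak(image, decoy_features, epsilon=0.03, steps=100, lr=0.01):
    """Return a cloaked copy of image (3, H, W, values in [0, 1]) whose
    features resemble decoy_features, with per-pixel change <= epsilon."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feats = feature_extractor((image + delta).unsqueeze(0))
        loss = nn.functional.mse_loss(feats, decoy_features)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():             # keep the change imperceptible
            delta.clamp_(-epsilon, epsilon)
    return (image + delta).clamp(0, 1).detach()

artwork = torch.rand(3, 256, 256)
decoy = feature_extractor(torch.rand(3, 256, 256).unsqueeze(0)).detach()
cloaked = cloak(artwork, decoy)   # looks ~identical, but features differ
```

The key design point is the epsilon bound: it caps how far any pixel may move, which is why humans see essentially the same picture while a model training on the cloaked image learns misleading style features.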

While Glaze Beta2 is available for download, the research is under peer review.

Glaze, however, has its share of shortcomings. For instance, changes made to certain artworks that have flat colours and smooth backgrounds, such as animation styles, are more visible. "While this is not unexpected, we are searching for methods to reduce the visual impact for these styles," the makers say.

Also, "unfortunately, Glaze is not a permanent solution against AI mimicry," they say. It is because AI evolves quickly, and systems like Glaze face an inherent challenge of being future-proof. "Techniques we use to cloak artworks today might be overcome by a future countermeasure, possibly rendering previously protected art vulnerable," they add.

Although the tool is far from perfect, its utility for artists is beyond any doubt. The issue becomes all the more glaring when one considers multiple artists who find it tough to earn a decent living through this craft. The AI companies, on the other hand, many of whom charge a subscription fee, earn millions.

Rules and laws are yet to catch up with the pace at which AI is advancing, leaving little for artists to fight with to protect their work. This is where projects like Glaze rise to prominence.

"It is important to note that Glaze is not a panacea, but a necessary first step towards artist-centric protection tools to resist AI mimicry. We hope that Glaze and follow-up projects will provide some protection to artists while longer term (legal, regulatory) efforts take hold," it says on Glaze's website.

Meanwhile, the technology has already hopped to the next stop. The startup Runway AI has come up with a generator that creates videos from a mere text prompt.


How Will Artificial Intelligence Affect Hollywood? – No Film School

Are you ready to see AI doing work in Hollywood?

Artificial Intelligence (AI) is changing the world as we know it, and the film industry is no exception. Hollywood, as a major player in the industry, is increasingly using AI as a tool to enhance the creative process, streamline production, and improve decision-making.

From writing and directing to producing and marketing, AI is being used in various ways to make Hollywood more efficient and effective. However, with these advancements come potential risks and challenges, such as the loss of creative control and the homogenization of output. It's kind of scary to think your job may not be safe because they're going to bring in a computer to do it.

In this article, we will explore how AI is affecting Hollywood and what the future may hold for this evolving relationship between technology and creativity.

Let's take a look at how artificial intelligence can affect Hollywood as a whole.

Let's take a gander at the general way artificial intelligence will change things. Some of this stuff is already happening.

Personalized Content: AI algorithms can analyze data about viewer preferences and make recommendations based on those preferences. This could lead to more personalized content being produced, as studios can use AI to tailor their movies to specific audiences.

Improved Special Effects: AI can be used to create more realistic and impressive special effects. For example, AI can be used to create more realistic facial expressions and movements in computer-generated characters.

Cost Reduction: AI can be used to automate certain aspects of the filmmaking process, such as editing or sound design, which could reduce costs and increase efficiency.

Storytelling: AI could be used to generate storylines and plot points, which could help filmmakers develop new ideas and create more compelling narratives.

Data Analysis: AI can be used to analyze data from box office sales and viewer feedback to help studios make more informed decisions about what types of movies to produce and how to market them.

AI is already having an impact on the film industry and is likely to continue to do so in the future. In terms of writers in Hollywood, AI could potentially have both positive and negative effects.

On the positive side, AI could be used as a tool to help writers with tasks such as generating ideas, developing characters, and even writing dialogue. For example, some companies are already using AI algorithms to analyze audience data and make recommendations about what kind of movies or TV shows are likely to be successful. AI could also be used to create more efficient and effective writing processes, allowing writers to focus more on the creative aspects of their work.

However, there are also potential negative effects of AI on writers in Hollywood.

One concern is that AI could be used to replace human writers altogether, leading to job losses and a loss of creativity in the industry. Another concern is that AI could be used to create content that is formulaic or lacks originality. It might be bad at the nuance of human experience and be limited with how it perceives life.

'Metropolis' Credit: Parufamet

Writers won't be the only ones affected by this new trend. Directors also should have some worries. On the positive side, AI could be used as a tool to help directors with tasks such as pre-visualization, shot planning, and post-production.

For example, AI could be used to create virtual sets, which could help directors to visualize their scenes and make decisions about camera angles and lighting before filming begins. AI could also be used to analyze and edit footage, making the post-production process more efficient and cost-effective.

However, there are also potential negative effects of AI on directors in Hollywood.

One concern is that AI could be used to replace human directors altogether. We would instead have computers trying to tell us about the human experience, or estimating emotions they are not complex enough to feel. This could lead toward an overreliance on tropes, or on the points of view of the people who created the AI, which may not be reflective of audiences as a whole.

'Mr. Robot' Credit: USA Network

When it comes to producing, AI could be used as a tool to help producers with tasks such as predicting audience response, optimizing marketing strategies, and even identifying potential investment opportunities.

For example, AI algorithms could analyze audience data to predict which types of films or TV shows are likely to be successful, helping producers make more informed decisions about what projects to pursue. AI could also be used to analyze marketing data and make recommendations about how to reach and engage audiences more effectively.

In reality, this kind of intelligence might completely eliminate producers. Who needs someone to make calls to package when a computer can send form emails to agents or use its metrics to decide which projects it should be greenlighting?

That just is the start.

'Ex Machina' Credit: A24

I hate being a constant fearmonger on this website, but I don't like the cavalier way people have been talking about artificial intelligence and its applications in Hollywood. It's going to take jobs away from artists and people with taste, and we have to nip that in the bud now before it is too late.

People keep saying that AI is further in the future than you think, but what if I told you that if you read this article, every paragraph up until this section was mostly written by a computer? Yes, I had to go through and polish it and, yes, there were a few mistakes, but we are not as far off from this being the norm as you think.

I legitimately just added the headings and let ChatGPT do the rest, mostly as an experiment. And I think we can agree those answers are well thought out and mimic the way I usually write. They even mimic our website format.

The fact is, when giant corporations buy a bunch of Hollywood companies, they are looking for ways to strip the movie and TV process down. How can we employ fewer people and maximize profits? Well, I think they will do it with computer-generated stories and positions.

That spells less creativity and originality and work for us all.

Let me know what you think about all this in the comments.


As per the "Trust in Artificial Intelligence" study, 42% individuals fear … – Digital Information World

Artificial intelligence (AI) has proven helpful to the world in many ways, including the assistants and robots that have taken on many of the tasks associated with daily life and replaced humans in surgical procedures and other professions. Several AI models and tools that are immediately in front of our eyes are ensuring that the world will be a better place, so it's not just robots that have a beneficial influence on humans; there are models, including ChatGPT and DALL-E, that have revolutionized the tech industry.

For those who may not be aware, ChatGPT is a chatbot that was introduced in November of 2022. It was created to help users with a variety of tasks. Another significant tool, DALL-E, is used to produce lifelike images from nothing but a description. Both were created by a business named OpenAI.

Yet, we are fully aware that the bad always comes along with the good. Therefore, to learn more about how people see artificial intelligence, a study titled "Trust in Artificial Intelligence" was conducted during September and October 2022. The University of Queensland and KPMG Australia conducted the study and provided the data on which it was based. A total of 17,193 respondents from seventeen different nations participated in the survey.

The survey's questions offered three separate response options: "agree," "disagree," and "neutral." Despite everything that has been said about how AI has helped humanity, some individuals still believe that the world would be a far better place without it. 42% of respondents, or two out of every five, agreed with this statement.

What may be the cause of that, then? According to the study, many individuals are concerned about their occupations and careers being replaced by AI robots that resemble humans. While 39% of those polled denied that AI could take over their future, it is likely that they still believe some jobs couldn't be replaced by it, or that they aren't aware of how rapidly AI's value is increasing. The poll also found that 19% of respondents held a neutral opinion on the matter.

Nonetheless, each person sees the world from a unique perspective. According to the poll, 67% of respondents remain hopeful and upbeat about the future of AI. Even if they are aware of all the negative effects and how they will affect us, some people (57%) are still relaxed about it.

Furthermore, 47% of people report feeling extremely nervous because they fear AI would progressively destroy the human world and that there are very significant risks associated with using AI in daily life, which is not surprising. Also, 24% of them express an angry sentiment against AI and its applications.
