
LLMs can't self-correct in reasoning tasks, DeepMind study finds – TechTalks

Image generated with Bing Image Creator

This article is part of our coverage of the latest in AI research.

Scientists are inventing various strategies to enhance the accuracy and reasoning abilities of large language models (LLMs), such as retrieval augmentation and chain-of-thought reasoning.

Among these, self-correction, a technique where an LLM refines its own responses, has gained significant traction, demonstrating efficacy across numerous applications. However, the mechanics behind its success remain elusive.

A recent study conducted by Google DeepMind in collaboration with the University of Illinois at Urbana-Champaign reveals that LLMs often falter when self-correcting their responses without external feedback. In fact, the study suggests that self-correction can sometimes impair the performance of these models, challenging the prevailing understanding of this popular technique.

Self-correction is predicated on the idea that LLMs can assess the accuracy of their outputs and refine their responses. For instance, an LLM might initially fail a math problem but correct its answer after reviewing its own output and reasoning.

Several studies have observed this process, also known as self-critique, self-refine, or self-improve.

However, the effectiveness of self-correction is not universal across all tasks. The paper from DeepMind and University of Illinois reveals that the success of self-correction is largely contingent on the nature of the task at hand. In reasoning tasks, self-correction techniques typically succeed only when they can leverage external sources, such as human feedback, an external tool like a calculator or code executor, or a knowledge base.

The researchers underscore that high-quality feedback is not accessible in many applications. This makes it crucial to understand the inherent capabilities of LLMs and to discern how much of the self-correction can be attributed to the model's internal knowledge. They introduce the concept of intrinsic self-correction, which refers to a scenario where the model attempts to correct its initial responses based solely on its built-in capabilities, without any external feedback.

The researchers put self-correction to the test on several benchmarks that measure model performance in solving math word problems, answering multiple-choice questions, and tackling question-answering problems that require reasoning. They employed a three-step process for self-correction. First, they prompt the model for an answer. Next, they prompt it to review its previous response. Finally, they prompt it a third time to answer the original question based on its self-generated feedback.
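The three-step process can be sketched as a simple prompting loop. This is a minimal illustration, not the authors' code; the `complete` callable is an assumed stand-in for any LLM text-completion API.

```python
def intrinsic_self_correct(question: str, complete) -> str:
    """Run the three-step intrinsic self-correction protocol.

    `complete` is a hypothetical function that takes a prompt string and
    returns the model's text response.
    """
    # Step 1: prompt the model for an initial answer.
    answer = complete(f"Q: {question}\nA:")
    # Step 2: prompt it to review its previous response (no external feedback).
    critique = complete(
        f"Q: {question}\nYour answer: {answer}\n"
        "Review your previous answer and find any problems with it."
    )
    # Step 3: prompt it to answer again based on its self-generated feedback.
    return complete(
        f"Q: {question}\nPrevious answer: {answer}\nCritique: {critique}\n"
        "Based on the critique, answer the original question again."
    )
```

Note that every signal in the loop comes from the model itself; nothing external ever tells it whether the initial answer was right.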

Their findings reveal that self-correction works effectively when the models have access to the ground-truth labels included in the benchmark datasets. This is because the algorithm can accurately determine when to halt the reasoning process and avoid changing the answer when it is already correct. As the researchers state, "These results use ground-truth labels to prevent the model from altering a correct answer to an incorrect one. However, determining how to prevent such mischanges is, in fact, the key to ensuring the success of self-correction."

However, this assumption does not reflect real-world scenarios, where access to the ground truth is not always available. If the ground truth were readily accessible, there would be no need to employ a machine learning model to predict it. The researchers demonstrate that when they remove the labels from the self-correction process, the performance of the models begins to decline significantly.
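The oracle setup the authors criticize can be made concrete with a short sketch (illustrative names, not the paper's code): the loop only knows when to stop because it peeks at the ground-truth label, which real deployments do not have.

```python
def self_correct_with_oracle(question, label, complete, max_rounds=3):
    """Self-correction with an oracle stopping rule.

    `complete` is a hypothetical LLM call; `label` is the benchmark's
    ground-truth answer, which is what makes this setup unrealistic.
    """
    answer = complete(f"Q: {question}\nA:")
    for _ in range(max_rounds):
        if answer == label:  # oracle check: unavailable outside benchmarks
            return answer    # a correct answer can never be "corrected" away
        answer = complete(
            f"Q: {question}\nPrevious answer: {answer}\nTry again."
        )
    return answer
```

Remove the `answer == label` check and the loop has no principled way to stop, which is exactly when the paper observes correct answers being revised into wrong ones.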

Interestingly, the models often produce the correct answer initially, but switch to an incorrect response after self-correction. For instance, in GPT-3.5-Turbo (the model used in the free version of ChatGPT), the performance dropped by almost half on the CommonSenseQA question-answering dataset when self-correction was applied. GPT-4 also exhibited a performance drop, albeit by a smaller margin.

According to the researchers, if the model is well-aligned and paired with a thoughtfully designed initial prompt, the initial response should already be optimal given the conditions of the prompt and the specific decoding algorithm. In this case, introducing feedback can be viewed as adding an additional prompt, potentially skewing the model's response away from an optimal answer to the original prompt. "In an intrinsic self-correction setting, on the reasoning tasks, this supplementary prompt may not offer any extra advantage for answering the question. In fact, it might even bias the model away from producing an optimal response to the initial prompt, resulting in a decrease in performance," the researchers write.

Self-correction is also prevalent in multi-agent LLM applications. In these scenarios, multiple instances of an LLM, such as ChatGPT, are given different instructions to perform distinct roles in a multi-sided debate. For instance, one agent might be tasked with generating code, while another is instructed to review the code for errors.

In these applications, self-correction is implemented by instructing agents to critique each other's responses. However, the researchers found that this multi-agent critique does not lead to any form of improvement through debate. Instead, it results in a form of self-consistency, where the different agents generate multiple responses and then engage in a form of majority voting to select an answer.

"Rather than labeling the multi-agent debate as a form of debate or critique, it is more appropriate to perceive it as a means to achieve consistency across multiple model generations," the researchers write.
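The consistency effect the researchers describe reduces to majority voting over independently generated answers. A minimal sketch, with the agents' final responses standing in as a plain list:

```python
from collections import Counter

def majority_vote(agent_answers):
    """Pick the most common final answer across the debating agents."""
    return Counter(agent_answers).most_common(1)[0][0]

print(majority_vote(["42", "41", "42"]))  # prints 42
```

In other words, the "debate" adds no critique signal beyond what sampling the model several times and voting would already provide.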

While self-correction may not enhance reasoning, the researchers found that it can be effective in tasks such as modifying the style of the LLM's output or making the response safer. They refer to these tasks as post-hoc prompting, where the prompting is applied after the responses have been generated. They write, "Scenarios in which self-correction enhances model responses occur when it can provide valuable instruction or feedback that pre-hoc prompting cannot."

Another key finding of the paper is that the improvement attributed to self-correction in certain tasks may be due to an inadequately crafted initial instruction that is outperformed by a carefully constructed feedback prompt. In such cases, incorporating the feedback into the initial instruction, referred to as the pre-hoc prompt, can yield better results and reduce inference costs. The researchers state, "It is meaningless to employ a well-crafted post-hoc prompt to guide the model in self-correcting a response generated through a poorly constructed pre-hoc prompt. For a fair comparison, equal effort should be invested in both pre-hoc and post-hoc prompting."

The researchers conclude by urging the community to approach the concept of self-correction with skepticism and to apply it judiciously.

"It is imperative for researchers and practitioners to approach the concept of self-correction with a discerning perspective, acknowledging its potential and recognizing its boundaries," the researchers write. "By doing so, we can better equip this technique to address the limitations of LLMs, steering their evolution towards enhanced accuracy and reliability."


DeepMind cofounder is tired of ‘knee-jerk bad takes’ about AI – VentureBeat


For the past month, Mustafa Suleyman has been making the rounds promoting his recent book The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma.

Suleyman, the DeepMind cofounder who is now cofounder and CEO of Inflection AI (which set off fireworks in June with its $1.3 billion funding), may reasonably be all talked out after a slew of interviews about his warnings about unprecedented AI risks and how they can be contained. Still, he recently answered a batch of questions from VentureBeat about everything from what he really worries about when it comes to AI to his favorite AI tools. Notably, he criticized what he considers "knee-jerk bad takes" around AI and the "hyperventilating press release" vibe of AI Twitter/X.

This interview has been edited and condensed for clarity.

VentureBeat: You talk a great deal about potential AI risks, including those that could be catastrophic. But what are the silliest scenarios that you've heard people come up with around AI risks? Ones that you just don't think are concerning, or that are just bogus or unlikely?


Mustafa Suleyman: AI is genuinely transformative, a historic technology that is moving so fast, one with such wide-ranging implications that it naturally breeds a certain level of speculation, especially in some of the darker scenarios around superintelligence. As soon as you start talking in those terms you are getting into some inherently extreme and uncertain areas. While I don't think these are the most pressing worries, and they can be way over the top, I'd hesitate to call anyone silly when so much is so unknown. Some of these risks might be distant, they might be small, maybe even unlikely, but it's better to treat powerful and still only partially understood technologies with a degree of precaution than to dismiss their risks outright. My approach is to be careful about buying into any narratives about AI, but also to constantly keep an open mind.

VentureBeat: On the flip side, what is the biggest AI risk that you think people underestimate? And why?

Suleyman: Plenty of people are thinking about those far-out risks you mentioned above, and plenty are addressing present-day harms like algorithmic bias. What's missing is a whole middle layer of risk coming over the next few years. Everyone has missed this, and yet it's absolutely critical. Think of it like this: AI is probably the greatest force amplifier in history. It will help anyone and everyone achieve their goals. For the most part this will be great; whether you are launching a business or just trying to get on top of your inbox, doing so will be much, much easier. The downside is that this extends to bad actors. Because AI will proliferate everywhere, they too will be empowered, able to achieve whatever they want. It doesn't take too much imagination to see how that could go wrong. Stopping this from happening, containing AI, is one of the major challenges of the technology.

VentureBeat: Do you think if you didn't live in Palo Alto, in the midst of so many in Silicon Valley concerned about the same things, that you would be just as worried about AI risks as you are now?

Suleyman: Yes, absolutely. I was worrying about these things in London nearly 15 years ago when they were at best fringe topics for a small group of academics!

VentureBeat: You famously co-founded DeepMind in 2010. What were your thoughts back then about the risks of AI as well as the exciting possibilities?

Suleyman: For me the risks and the opportunities have always existed side by side, right from the start of my work in AI. Seeing one aspect without seeing the other means having a flawed perspective. Understanding technology means grappling with its contradictory impacts. Throughout history, technologies have always come with positives and negatives, and it's narrow and myopic just to emphasize one or the other. Although in aggregate I think they have been a net positive for humanity, there were always downsides, from job losses in the wake of the industrial revolution to the wars of religion in the wake of the press. Technologies are tools and weapons. We've probably gotten a lot better, as a society, at thinking about those downsides over the last ten years or so. Technology is no longer seen as this automatic path to a bright, shiny future, and that's right. The flipside is that we might be losing sight of the benefits, focusing so much on those harms that we miss how much this could help us. Overall I'm a huge believer in being cautious and prioritizing safety, and hence welcome a more rounded, critical view. But it's definitely vital to keep both in mind.

VentureBeat: There has been seemingly endless hype around generative AI since ChatGPT launched in November 2022. If there is one hype-y concept that you would be happy never to hear again, what would it be?

Suleyman: I won't miss a lot of the knee-jerk bad takes around AI. One of the downsides of all the hype is that people then assume it is only hype, that there's no substance underneath. Spend all day on Twitter/X and the world looks like a hyperventilating press release. The endless froth obscures what's actually happening, however significant it actually is. Once we get over the hype phase I think the true revolutionary character of this technology will be more apparent, not less.

VentureBeat: We're all captivated by the conversations happening on Capitol Hill around AI. What is it really like to discuss these topics with lawmakers? Who do you find the most well-informed? How do you bridge the gap between policymakers and tech folks?

Suleyman: Over time it's become much, much easier. Whereas a few years ago getting lawmakers to take this seriously was a tall order, now they are moving fast to get involved. It's become apparent to them, as to everyone else, that this is happening: AI is inevitable, it's moving fast, and there are yawning regulatory gaps. In DC and elsewhere there is a real appetite for learning about AI, for getting stuck in and trying to make it work. So in general the regulatory conversation is far more advanced than it has ever been in the past. The gap always comes because of the mismatch in timescales. AI is improving at a rate never seen before with any previous technology. Models today are nine orders of magnitude bigger than those of a decade ago; that's beyond even Moore's Law. Politics necessarily grinds away at the same old pace, subject as always to the broken incentives of the media cycle. It's impossible for legislation in generally slow-moving institutions to keep up, and to date no one has managed to effectively get around this. I'm hugely interested in ways or institutions that might bridge this. Watch this space!

VentureBeat: Besides Pi, what is your favorite AI tool right now? Do you use any of the image generators?

Suleyman: I use pretty much all the popular AI tools out there, not least for research. What I would highlight are not necessarily individual consumer products, but the AI you don't see, the way AI is embedding itself everywhere: in scanning medical images, routing power more efficiently in data centers and on grids, in organizing warehouses, and in myriad other uses that work under the hood. AI is about more than just image generators and chatbots, as extraordinary as they can be.

VentureBeat: You talk about the Coming Wave, but have you ever been surfing?

Suleyman: I have! Not that I would claim to be any good. I'm more of a metaphorical surfer!

VentureBeat: You have been active in AI policy for years and obviously spend a great deal of time thinking about how companies and governments can ride the Coming Wave. But obviously for all of us it comes with some anxiety. What are your personal strategies for handling AI or tech-related stress and anxiety regarding the future?

Suleyman: It's a really good question, and an important point. It can seem completely overwhelming, paralyzing even. There are two things I'd say to someone here. The first is that although AI may cause problems, it will also help solve a whole load of them as well. Climate change, stalling life expectancy, slowing economic growth, the pressures of a demographic slowdown: the 21st century has its fair share of epochal challenges, and we need new tools to meet them. I would never say AI alone can do this. It is only as effective as its context and use, but I also think meeting them without something like AI is much, much harder. Again, let's remember both sides here, the worries but also the benefits.

Secondly, too many people are inclined to what I call pessimism aversion, the dominant reaction of elites to scenarios like AI. They take the downsides on board, but then quickly ignore them, look away from where it might lead, and carry on as if everything is fine. It's not doomerism, but a kind of willful ignorance or dream world. This is a terrible foundation for the future! We do need to confront hard questions. Anxiety might be an important signal here. The only way we make all this work is by following the implications wherever they lead. It's not an easy place to be, but better to see clearly and have a chance of making a difference than look the other way. I find the best cure for that is working to actively build contained technology and not standing on the sidelines.



Deep Diving Into the Mind: How Gratitude Helped Me Find Beauty – Her Campus

This article is written by a student writer from the Her Campus at MSU chapter.

What ensues when life is no longer perfect? The truth about humanity is that no single being can be defined as simply perfect. This is a realization that I have come to terms with after intentionally practicing the art of identifying imperfection, and celebrating it when it seems absurd. It is too easy to catch myself in times when I feel like I am not enough, or when life starts to feel like the marathon I am constantly told it could be. A fizzled friendship, a dampened connection, maybe even poor health or burdening regrets. These are all things that seemingly stack into my mind, diffusing the joy that something just could have been. However, that draining search for what could be is exactly what I could not see.

My path towards gratitude started with an entire dose of skepticism. I like to think that I am logical, that I reason through things, while having an intuition that certain things just are not for me. Hence, starting to identify the issue with my apprehension was the biggest learning curve I faced. I had to sit down, be honest with myself, and truly understand that something different does not necessarily equate to an experience being invalid. I feel as though sometimes this is a common thought process. A lot of my interactions with others have continuously exemplified that leaving the warmer confines of comfort and knowing takes a different kind of courage from the individual trying. With patience, I forced myself to finally start, right during my freshman year of college. My first mode of appreciation awareness was a bullet journal, in which I wrote every single morning: 10 things that I was grateful for. When I wrote these, my mind was not always present, and so I gave myself the aim to start with a broader scope; perhaps I appreciated the new sweater my sister bought me, or the grade I got on that one chemistry exam last week. However, this quickly started to feel very trivial, almost like a receipt for my materialistic earnings or possessions. I lacked the total feeling of satisfaction that others used to talk about when they documented their own journeys towards identifying gratitude. I came to the understanding that I needed to scale down and become a little bit more aware of what surrounds me. I am so thankful for this understanding, because it took me a while to grasp the idea that gratitude is more than a list of what I appreciate. It is a collection of the tiny details that make me who I am, that give bigger ideas their charm or character. Most importantly, it is a magnifying glass that gives me a wholehearted perspective; things are never as shallow as they seem. So I literally needed to zoom in.

Delivery of Thought

With this new comprehension, I continue to implement my focused understanding of feeling gratified into my current practice. I still write my 10 things I am grateful for every morning, but with clarity. For example, I appreciate the fact that I slept really well last night, am in good health, and have genuine friendships that revitalize me. There are times when I feel overwhelmed by something, and use this practice to reprogram my mind into seeing a positive corner of the very thing that is draining my energy. It could be a test that I studied so hard for which, although my effort was not reflected in my final score, taught me more about how my own mind works with difficult subjects. Now I carry that new knowledge with me, moving forward towards the next opportunity. In this sense, my practice of gratitude has become a reflection of my observations as well. At the risk of sounding sentimental, I truly do appreciate good weather, and the stars on a clear night sky, even the feeling of silence during the early hours of a muted sunrise. These all would have been easily overlooked as natural occurrences in my past: simply things that just always happen, when in reality they are so much more to me, and impact the way I carry myself through a day, or a week.

Implementation

However, gratitude is not a one-size-fits-all type of remedy for all individuals who may resonate with the feelings I have felt and sometimes do continue to feel. The driving force behind an effective practice was finding something that worked for me. I can dedicate five minutes to writing in a journal every morning, or I can simply remind myself to be present and appreciate the details around me when driving to work or school. As is true with most other practices, coming to this place took trial and error. Another aspect of this whole application that framed it more comprehensively for me was to channel some of the same observational gratitude back to myself, in order to utilize that positive energy for my own uplifting. This could be by being proud of my perseverance despite a poor outcome, or celebrating the fact that I even tried something new at all when I had numerous other options to stay within my own comfort zone. This smaller exercise allows me to freely celebrate my own successes, on a level that feels more authentic to me, and one that is mine alone.

This perspective of finding gratitude, no matter how minute, is marked by understanding, compassion, and handfuls of patience, forever changing the way I see all that lies around me. I am more mindful, able to notice and appreciate more than just the physical things around me, and at the very least, I finally know what Drake meant when he sang, "I'm way up, I feel blessed."

Sources

https://genius.com/Drake-over-lyrics


Machines will read our minds – Maclean’s

AI brain sensors will translate our thoughts into speech, text or even other languages

Yalda Mohsenzadeh is a professor of computer science at Western University.

(This illustration was created by Maclean's art director Anna Minzhulina using the generative AI image program Imagine. Minzhulina spent weeks feeding prompts into the program, inspired by the essay.)

The brain has always been the source of inspiration for artificial intelligence scientists, with billions of neurons that work together to enable us to think, see, hear and remember. Soon, AI will be able to do that too, by decoding the patterns of the mind.

Take, for example, the case of Ann Johnson, a Saskatchewan woman who had a brain-stem stroke at 30 years old, leaving her unable to speak. This year, as part of a clinical trial in California, she had more than 200 electrodes placed inside her head, in an area of the brain that produces speech. A port connected to a computer allowed an AI algorithm that uses a variety of deep-learning techniques to interpret her neural activity. From there, it produced speech: Ann was able to communicate clearly with her husband through an avatar that spoke as she was thinking. We knew the AI was correctly reading her thoughts because researchers tested its ability to replicate controlled information. They had a dataset of sentences that contained a vast range of sounds. They showed Ann these sentences and got her to repeat them over and over in her mind in order to train the AI algorithm to recognize which brain signal corresponded to which sound.

MORE: The future of AI, and Canada's place in it

After training the AI algorithm, the scientists tested it in real time. She thought it, and the avatar said it. Currently, this AI can process about 78 words a minute. It's capable of more than a few simple words: it has 39 distinctive sounds that are used to form whatever words and sentences Ann wants.

Is this something we can roll out to all patients who can't speak? Not yet. While all people have some commonality in terms of brain function and information processing, much brain activity is unique to each person, and it varies throughout the day. The other limitation of this work is that it has to be done in a controlled environment, because the device is implanted directly into the head of the patient.

In our lab, we show individuals videos or images while recording their brain activity, using wearable sensors on their scalps that are sensitive to tiny changes in electrical fields. We then use AI techniques to decode what video or image they see. Essentially, we're asking: what are the dynamics of the brain processes that give rise to visual cognition? We've found we can successfully determine what the person was looking at, and thus identify intricate neural dynamics and brain processes that create our meaningful perception of the visual world. Yes, the data is noisier than what you get when you attach sensors directly to the brain. But as this technology develops, it brings us closer to understanding and translating what the brain is doing. That means people with severe paralysis, stroke damage or other conditions that affect their ability to talk may soon have a means to do so.

What everyone wants to know, of course, is whether AI might be able to read our minds: could we control our computers with just a thought? I do not believe this is science fiction. It's not something that will only happen in 100 years; it could very likely happen in the next decade. But first, we need two key developments: better sensors to capture signals from the brain, and an improvement in AI techniques that can read brain signals and decode information.

Once we have those, the applications will not only be medical, but commercial as well. For example, right now, if we want to Google something, we have to type it into our mobile phone or our laptop, or ask an AI assistant to find it for us. It would be amazing if you could think of a question and then, with a wireless device, transmit that question to the cloud, where AI would search for the answer and send it right back to your brain.

MORE: Personalized, preventive medicine is on its way

The field of AI and deep learning is evolving fast. New algorithms, methods and techniques are appearing all the time. One day, we might be able to translate automatically and respond to someone in their own language, or control a vehicle with just our thoughts. Of course, all of this is still theoretical. It will require the blending of sensors and the AI algorithms that already do language translation or drive autonomous cars. But it shows the exciting horizons this technology could bring.

There are challenges to consider with this type of research. For example, reading brain impulses could also help companies develop targeted advertising. And what would the companies that read our minds do with that information? We'd need to ensure privacy, data security and consent. It's similar to the ethical considerations we have with social media today. We don't want the wrong people reading our minds.

We reached out to Canada's top AI thinkers in fields like ethics, health and computer science and asked them to predict where AI will take us in the coming years, for better or worse. The results may sound like science fiction, but they're coming at you sooner than you think. To stay ahead of it all, read the other essays that make up our AI cover story, which was published in the November 2023 issue of Maclean's.


trooVRS & Digital Mind State Present: The First-Ever Mixtape Release Party in the Metaverse – Yahoo Finance

Celebrating 50 Years of Hip Hop with "Web 3: The Algorithm Is Real" Mixtape, Featuring West Coast Legend Ras Kass

LOS ANGELES, Oct. 11, 2023 (GLOBE NEWSWIRE) -- Today, the worlds of music and technology collide as West Coast rap luminary Ras Kass reveals his trailblazing track, "Avatar Gangster," from the "Web 3: The Algorithm Is Real" free mixtape. This visionary endeavor, encompassing 18 compelling tracks, gains an extra layer of allure with the participation of the renowned international cover model, influencer, and host Myla Tkachenko.

This mixtape will debut in the metaverse and be broadcast on FUBU RADIO this October. As hip-hop celebrates its half-century milestone, it prompts us to ponder its evolution in the next decade. What will the future incarnation of Hip Hop offer? Enhanced by Tech With Soul, attendees can delve deep into the metaverse, joining prominent artists like Money B of Digital Underground, Ras Kass, B-Legit, and Vin Rock of Naughty By Nature. They will delve into the future of hip-hop and technology, exploring the transformative mixtape experience.

Moreover, a distinctive panel discussion, steered by Mike Johns, will spotlight the confluence of technology and music, the rise of hip-hop in the metaverse, and how artists can harness web3 technologies to foster fan engagement and monetize their tunes.

Ras Kass shares his enthusiasm: "As an ardent metaverse advocate, I am eager to immerse myself, engage with my fanbase, and expand it. The metaverse is about to witness an unprecedented surge in vibrancy. Be a part of this journey with me!" Fans also get the exclusive chance to interact directly with Ras Kass and his virtual avatar in a unique meet-and-greet session.

Under the expert supervision of Captain KG, DJ William B. IV, and Mike Johns, overseeing executive production, the mixtape's debut on trooVRS is slated for Monday, October 16, between 9 and 10 p.m. EDT. For those who miss it, a re-airing awaits on the Tech With Soul/trooVRS YouTube channel. A lucky hundred fans can also snag free tickets via Eventbrite, stepping into a web-based and desktop-only virtual experience set within the meticulously designed trooVRS stash house: absolutely no downloads or VR devices required.


Guided by the vision of Digital Mind State, the "Web 3: The Algorithm Is Real" mixtape is teeming with exclusive offerings ranging from music, art, and videos to unique merchandise.

Mike Johns, the visionary behind Digital Mind State, remarked, "Collaborating with trooVRS to meld the metaverse with hip-hop in a milieu that empowers artist scalability and monetization is invigorating. trooVRS truly resonates with and understands cultural nuances."

Adrian Whant, founder and CEO of trooVRS, echoed this sentiment, stating, "Our collaboration with Digital Mind State is a monumental stride. Showcasing an iconic event like the 50-year hip-hop celebration in an aesthetically captivating and expansive manner is genuinely unparalleled."


About trooVRS: Founded in 2022, trooVRS is an immersive browser-based media platform. trooVRS offers users engaging men's lifestyle content across virtual editorial environments, I-commerce, and like-minded communities. Driven by solution-oriented VR storytelling, trooVRS prioritizes practical knowledge, global creators' voices, and impactful content. The platform pioneers virtual worlds in the metaverse, curating immersive brand-consumer experiences. To learn more about trooVRS, please visit: https://troovrs.io/

About Digital Mind State: Based in Los Angeles, CA, Digital Mind State disrupts the entertainment industry as a multidimensional creative agency. With a unique blend of creativity and innovation, they craft solutions for entertainment and lifestyle brands, connecting people, brands, and culture in a Web 3.0 world. Learn more at https://digitalmindstate.com

About Ras Kass: Rapper Ras Kass was born John Austin IV in Watts, California. A voracious reader throughout his youth, he adopted his stage name in honor of Ethiopian emperor Ras Kassa Mercha. Ras Kass remains one of the most highly respected rappers today, winning acclaim for his skills as a lyricist. Kass has worked with the who's who of hip-hop, including Wu-Tang Clan, Dr. Dre, Kendrick Lamar, Kanye West, 2 Chainz, DJ Premier, and Talib Kweli, to name a few.

Contact: Lisa Cavalli, lisa@troovrs.io

Contact: Andrea Brown, Andrea@digitalmindstate.com


Darren Aronofsky’s ‘Postcard From Earth’ opens at the Sphere – Los Angeles Times

Las Vegas

We were somewhere above the Earth, on the edge of the exosphere, when the gummies began to take hold. Or maybe it was the Mai Tais, acquired during an extended happy hour at the Golden Tiki, the sort of joint where the bathrooms have naughty wallpaper and you can pose for photos on a giant plastic clam shell. I remember feeling lightheaded as we plunged through clouds, soared over mountains and then sank to the depths of the ocean, where I saw huge jellyfish swooping and diving around us. A voice shouted, "Oh, my God! Oh, wow!" The world trembled and shook. Or perhaps that was just my seat.

Director Darren Aronofsky's dreamy film "Postcard From Earth" landed at the Sphere in Las Vegas on Friday night, a week after U2 inaugurated the orb-shaped arena with a graphics-saturated concert inspired by their "Achtung Baby" album, and it was an event. About 5,000 people, including Aronofsky, were in attendance for the premiere. In his opening remarks, the filmmaker likened the technological process of making this unusual picture to "building the plane while flying it."

That's because "Postcard From Earth" is no ordinary movie. The interiors of the spherical arena are lined with a curving 270-degree screen that is about the size of four football fields and covers much of the walls and the ceiling. To create a film for this surface required a specially fabricated camera prototype dubbed Big Sky that could capture images in high enough resolution (18K) to broadcast them at the Sphere's massive scale: the building reaches a height of 366 feet, between 30 and 35 stories tall depending on how you measure it.

To see a film in the Sphere, therefore, isn't so much to watch it as it is to simply let it wash over you.

"Postcard From Earth" steeps you in a very loose 50-minute tale about life rising from the oceans onto the land, the birth of human civilizations and the moment, not so deep in the future, when humanity departs a despoiled Earth to settle other worlds, in the process recalling our home planet's magnificence.

The curving screens make it feel like a true immersion. You can observe a school of fish before you, then look up and see the sunlight piercing the water, as if you too were inhabiting the depths. The camera travels through shrines, churches and mosques where you marvel not only at the scale but at the details of dome-topping mosaics you'd need binoculars to spy in person. In a scene Aronofsky himself shared on social media, a male elephant hundreds of feet tall wanders right before your nose, the thunderous reverberations of each step delivered to your seat (and your lizard brain) via a haptic system.

Like so many things Vegas, it's the sort of thing for which it pays to be lightly toasted.

Darren Aronofsky's "Postcard From Earth" was filmed in locations on every continent.

(Meg Meyer / Sphere Entertainment)

As the program got started on Friday evening, an announcer hailed James Dolan, the controversial mogul who conceived the Sphere and runs both Madison Square Garden and the Sphere's parent company, Sphere Entertainment Co. "One man," intoned a woman's voice, "had a vision of the future of entertainment, and he drew a circle on a legal pad."

Like all visions of the future, however, the Sphere is made up of pieces of the past.

There is, for starters, its form and its purpose. The Sphere bears a resemblance to the pavilion that Charles Eames and Eero Saarinen designed for IBM for the New York World's Fair in 1964. The pavilion consisted of a series of exhibitions, including a puppet-style show about data processing, nested underneath a 90-foot-tall ovoid theater known as the Information Machine. Audiences were inserted into the theater via a pneumatic grandstand, after which they were treated to films about man's relationship to technology on a series of nine geometric screens adhered to the theater's sloping walls.

The Information Machine, read an ad from the era, "puts you inside the mind of a racing car driver at 120 miles an hour" and "explores the mysteries of a woman's mind as she plans the seating of a dinner party." A terrific seven-minute film uploaded to YouTube by Eames Office, the organization that preserves the work of Charles and Ray Eames, shows how the whole elaborate system worked. (Left unexplained: the mysteries of the female mind.)

IBM's ovoid pavilion at the New York World's Fair in June 1964.

(Morse Collection / Gado / Getty Images)

Designed by Populous, an architectural firm based in Kansas City, Mo., that specializes in large venues, the Sphere incorporates other pieces of popular culture past and present.

Any visit to the Sphere (a.k.a. the Sphere Experience) includes time built in to wander about the multi-story atrium, whose highly reflective interiors suggest the sleek sets of late Star Trek movies and the blue-light aesthetics of Disney's Tron films. Welcoming visitors to the space are five A.I.-powered robots named Aura, which can say your name, engage in basic conversation or offer explainers on topics such as directional audio and the building's engineering.

In her physical aspect, Aura, a mechanical robot covered, in part, with human-ish gray skin, evokes the coolly manipulative robot Ava in "Ex Machina," played by Alicia Vikander. But in terms of personality, she comes off more like the very advanced niece of Ford Motor Co.'s Hank the Robot, who puts on humorous routines at auto shows (and is actually manipulated in real time by an actor offstage).

Aura is hardly the only robot in Vegas, a town where automatons will make you a cocktail and serve you cookies and milk. During a tech convention in 2018, a local strip club featured pole-dancing robots named R2DD and TripleCPU. They did not offer lap dances.

A hologram, at top, materializes in the Sphere's atrium. Below: Aura, a robot, greets guests.

(Carolina Miranda / Los Angeles Times (top), Rich Fury / Sphere Entertainment)

The real shows at the Sphere are not in the atrium but on its surfaces, both inside and out.

The building has developed a cult following for the surreal array of images it has projected on its exterior skin since it was activated in July: eyeballs, emoji and a baby in a space helmet, along with an overwhelming number of ads for the U2 shows. As I pointed out in an earlier story, the Sphere is essentially a giant billboard, one that not only draws crowds on the street on a nightly basis but has a massive fan following on social media. Now the inside is drawing crowds too.

The Sphere is not a comfortable venue. In order to jam nearly 18,000 seats into a ball, the designers had to insert the stands at a very steep rake; as a result, the clearance between seats is very narrow. (Pity the fool in a center seat who has to go to the bathroom during a performance.) But it makes up for the snugness with spectacle: namely, the behemoth curving screen that makes a five-story IMAX feel diminutive and transforms the sight of a gecko eating a bug into something akin to a hallucination.

I have not yet been to a musical performance inside the venue, but seeing Aronofsky's film made me think of the observations of Las Vegas historian Michael Green. He told me he saw the venue fitting into a tradition of theme-park-driven attractions that date back to the late '80s. The Sphere, he says, is a continuation of the volcano, the fountains, the canals, the pirate show.

The whole thing certainly feels very Disney.

The architecture is Disney World's Epcot Center on steroids, and the immersive screen finds precedent in Disney attractions such as the Circarama (later the Circle-Vision 360), which over the decades has encircled audiences with majestic films about Italy, Canada and the U.S. And, of course, there is Soarin' Over California at the Disney California Adventure Park, a flight motion simulator that puts viewers in suspended seats before a domed, 80-foot IMAX screen featuring projections of snow-capped peaks, orange groves and the Golden Gate Bridge. Soarin' Over California features so-called 4D effects, such as breezes that blow through the venue during windy sequences, as does the Sphere (in addition to the haptic capabilities in many of the seats).

The Sphere's exterior projects a variety of images nightly, such as this emoji.

(Carolina A. Miranda / Los Angeles Times)

Aronofsky's film is of the Disney school of immersion. You soar over tea plantations, you plunge into crystalline oceans, you are lifted off into space. Even as the story shifts, and evidence of the damage humans have wrought becomes evident, the shots are nonetheless breathtaking; dour apartment blocks and garbage are rendered cinematic. This latter part is a more accessible, family-friendly version of Godfrey Reggio's hypnotic 1983 documentary, "Koyaanisqatsi," with its now-legendary score by Philip Glass. (Props to Aronofsky for tapping the composers of the L.A. collective the Echo Society to create a score that brings an industrial richness and atonality to the experience.)

There is no small irony in watching a film about loving the planet while sitting in a venue that is incinerating spectacular amounts of electricity. The villains in "Postcard From Earth" (namely, us) are kept largely abstract. The viewer floats over slums, a strip mine, an artisanal sulfur mine. You see the poor, but not the people who made them that way. There are no bald billionaires in super yachts or reality TV stars in flotillas of private jets. Somewhere in the narrative is a vague lesson about taking care of Mother Earth.

But moments of Aronofsky weirdness nonetheless puncture the veneer. At one moment, the audience inside the Sphere comes face to face with an audience inside a classical music hall, and for a moment we behold each other in silence, as if saying, "Here you are, there I am," a humorous mirror effect that takes you, for a split second, outside of your body.

Or maybe it was just the gummies.

Darren Aronofsky's 'Postcard from Earth'

Where: Sphere, 255 Sands Ave., Las Vegas
When: On view indefinitely
Admission: $49-$169, depending on the date
Info: thespherevegas.com


The Afterhours: Birkenstock struggles to find its footing; IRS says … – Proactive Investors UK

About Andrew Kessel

Andrew is a financial journalist with experience covering public companies in a wide breadth of industries, including tech, medicine, cryptocurrency, mining and retail. In addition to Proactive, he has been published in a Financial Times-owned newsletter covering broker-dealer firms and in the Columbia Missourian newspaper as the lead reporter focused on higher education. He got his start with an internship at Rolling Stone magazine.

Proactive financial news and online broadcast teams provide fast, accessible, informative and actionable business and finance news content to a global investment audience. All our content is produced independently by our experienced and qualified teams of news journalists.

The Proactive news team spans the world's key finance and investing hubs, with bureaus and studios in London, New York, Toronto, Vancouver, Sydney and Perth.

We are experts in medium and small-cap markets, and we also keep our community up to date with blue-chip companies, commodities and broader investment stories. This is content that excites and engages motivated private investors.

The team delivers news and unique insights across the market including but not confined to: biotech and pharma, mining and natural resources, battery metals, oil and gas, crypto and emerging digital and EV technologies.

Proactive has always been a forward looking and enthusiastic technology adopter.

Our human content creators are equipped with many decades of valuable expertise and experience. The team also has access to and use technologies to assist and enhance workflows.

Proactive will on occasion use automation and software tools, including generative AI. Nevertheless, all content published by Proactive is edited and authored by humans, in line with best practice in regard to content production and search engine optimisation.


Is trying to watermark AI images a losing battle? – Emerging Tech Brew

With a flood of sophisticated AI imagery loosening the internet's already-shaky grip on media reality, one of the most-discussed possible fixes is the tried-and-true watermark.

Concerned parties from Google to the White House have floated the idea of embedding signifiers in AI-generated images, whether perceptible to the human eye or not, as a way to differentiate them from unaltered photos and art.

But a new preprint paper from researchers at the University of Maryland casts some doubt on that endeavor. The team tested how easy it was to fool various watermarking techniques, as well as how the introduction of false positives (real images incorrectly watermarked as AI-generated) could muddy the waters.

After laying out the many ways watermarking can fail, the paper concluded that, "based on our results, designing a robust watermark is a challenging, but not necessarily impossible task." In a conversation with Tech Brew, however, co-author Soheil Feizi, an associate professor of computer science at the University of Maryland, was perhaps even more pessimistic.

"I don't believe that by just looking at an image we will be able to tell if this is AI-generated or real, especially with the current advances that we have in pixel image models," Feizi said. "So this problem becomes increasingly more difficult."

Conclusions like these, however, don't seem to have put much of a damper on enthusiasm around watermarking as a guardrail in the tech and policy world.

At a meeting at the White House in July, top companies in the space like OpenAI, Anthropic, Google, and Meta committed to developing ways for users to tell when content is AI-generated, such as a watermarking system, though academics at the time were skeptical of the agreements as a whole.

More recently, Google DeepMind unveiled a watermarking system called SynthID that embeds a watermark that is imperceptible to the human eye into the pixels of an image as it's being created.

Feizi and his co-authors classified systems like SynthID as low-perturbation techniques, or those where the watermark is invisible to the naked eye. The problem with these methods, according to the paper, is that if the images are subjected to a certain type of tampering, there is a fundamental trade-off between the number of AI-generated images that can slip through undetected and the portion of real images falsely tagged as AI creations.

Tech Brew keeps business leaders up-to-date on the latest innovations, automation advances, policy shifts, and more, so they can make informed decisions about tech.

This tampering, used in the experiment detailed in the University of Maryland paper, involves something the authors call a diffusion purification attack, which basically involves adding noise to an image and then denoising it with an AI diffusion model, the same type of tech at play in most modern image-generation systems.
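The idea is easier to see in a toy sketch. The snippet below is a hypothetical, minimal simulation, not SynthID or the paper's actual code: it "watermarks" an image by adding a faint secret pattern, detects the mark by correlating against that pattern, and then runs a crude stand-in for purification (Gaussian noise followed by a blur instead of a real diffusion denoiser). All names, amplitudes, and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (illustrative only): the "watermark" is a fixed
# pseudo-random +/-1 pattern known to the detector.
H, W = 128, 128
SECRET = rng.choice([-1.0, 1.0], size=(H, W))

def embed(image, alpha=5.0):
    """Low-perturbation watermark: add the secret pattern at a faint
    amplitude (alpha, in 0-255 pixel units) so the eye can't see it."""
    return np.clip(image + alpha * SECRET, 0.0, 255.0)

def detect(image, threshold=2.5):
    """Score = correlation of the (mean-centered) image with the pattern."""
    score = float(np.mean((image - image.mean()) * SECRET))
    return score, score > threshold

def purify(image, noise_std=8.0, kernel=5):
    """Crude stand-in for a diffusion purification attack: drown the
    watermark in Gaussian noise, then 'denoise' with a box blur."""
    h, w = image.shape
    noisy = image + rng.normal(0.0, noise_std, image.shape)
    padded = np.pad(noisy, kernel // 2, mode="edge")
    out = np.zeros_like(noisy)
    for dy in range(kernel):          # box blur in pure NumPy
        for dx in range(kernel):
            out += padded[dy:dy + h, dx:dx + w]
    return np.clip(out / kernel**2, 0.0, 255.0)

original = rng.uniform(0.0, 255.0, (H, W))
marked = embed(original)

score_before, flagged_before = detect(marked)
score_after, flagged_after = detect(purify(marked))
print(f"before attack: score={score_before:.2f}, flagged={flagged_before}")
print(f"after attack:  score={score_after:.2f}, flagged={flagged_after}")
```

With the fixed seed above, the detector flags the watermarked image before the attack but not after it: blurring averages away the per-pixel pattern, collapsing the correlation score, which mirrors the paper's point that noise-then-denoise can strip low-perturbation marks.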

"What we show theoretically is that [low-perturbation techniques] will never be able to become a reliable approach," Feizi said. Basically, the results suggest that reliable methods won't be possible even in the future.

Then there are the actually visible watermarks, or high-perturbation techniques in the technical parlance. Getting in the mindset of potential bad actors, Feizi and his team designed attacks that could successfully remove those watermarks.

They were also able to realistically fake this type of watermark, meaning nefarious parties could pass off real, potentially obscene images as the product of an image-generation model, something that could damage the reputation of the developers, the paper pointed out.

If viewers can't trust that a watermark corresponds with whether or not an image is actually AI-generated, it becomes effectively useless, Feizi argued. "Some people argue that even though watermarking is not reliable, it provides some information," he said. "But the argument here is that it actually provides zero information."

This paper isnt the first to question the feasibility of watermarking systems. Another recent paper from UC Santa Barbara and Carnegie Mellon University demonstrated that imperceptible watermarks can be removed.

Feizi also co-authored a paper in June arguing that watermarking AI-generated text with subtle patterns, a method companies like OpenAI were discussing at the time as a way to distinguish language generated by programs like ChatGPT, is not reliable in practical scenarios.

But the development of effective guardrails is taking on more urgency as the US barrels toward a presidential election that experts worry could be plagued by all sorts of new AI-generated misinformation.

"It is a bit scary to think of the implications," Feizi said. "We've got to make sure that we educate people to be skeptical of the content (either text or images or videos that may be released) in terms of their authenticity."


Starmus announces Jean-Michel Jarre and The Offspring as new … – Astronomy Magazine

BRATISLAVA, October 11, 2023: The Offspring and Jean-Michel Jarre will headline the seventh edition of the Starmus Festival, focused on the future of our home planet. They will be joined by Tony Hadley, former lead singer of the iconic '80s British pop band Spandau Ballet. The brainchild of Garik Israelian and Queen guitarist Sir Brian May, the science communication festival will be brought to global audiences thanks to a partnership with cybersecurity giant ESET on May 12-17, 2024, in Bratislava, Slovakia. The event promises to deliver an extraordinary lineup of world-class speakers, discussions and music performances. Throughout the multi-day event, festival goers will have the opportunity to enjoy numerous talks delving into diverse topics such as astrophysics, computer science, neuroscience, microbiology, and biochemistry.

The festival will also award the Stephen Hawking Medal for Science Communication across four categories: Music & Arts, Science Writing, Films & Entertainment, and Lifetime Achievement, with a live performance by Sir Brian May.

The astronaut and engineer Chris Hadfield; the scientist and leading voice in AI Gary Marcus; the Cambridge DeepMind Professor of Machine Learning Neil Lawrence; the SLAM Oxford Professor Philip Torr; the computer scientist and co-founder and CEO of Plumerai Roeland Nusselder; the popular British multi-talented comedian, broadcaster and author Robin Ince; the Ukrainian climate scientist Svitlana Krakovska; and the legendary oceanographer and chairman of Mission Blue/Sylvia Earle Alliance Dr. Sylvia Earle are the latest talents joining Starmus VII's unique panel of speakers (access the full list here).

The Astrophotography School, organized by former Astronomy Magazine senior editor Michael E. Bakich, is a traditional side event of the Starmus festival. Led by three of the world's best astroimagers, Damian A. Peach, Chris Schur and Martin Ratcliffe, the 2024 edition will offer a unique occasion for astrophotography enthusiasts to take pictures of celestial objects and enjoy a once-in-a-lifetime experience.

At its core, the Starmus festival embodies ESET's unwavering dedication to safeguarding the progress that technology enables. With over 30 years of experience in cyber-threats and digital security, ESET has firmly established itself as a research-first company. At home, ESET demonstrates its dedication to science through the ESET Science Award, an annual celebration that recognizes outstanding achievements in Slovak science. Joining forces with the Starmus Festival therefore represents the company's natural ambition to promote the power of science among local and global audiences.

"We are thrilled to partner with Starmus, a celebration where science and music harmonize to inspire innovation and curiosity. ESET stands at the intersection of research and security, working to join them and ensure that the digital landscape remains both trustworthy and protected," said Richard Marko, CEO at ESET. "In our ever-evolving digital landscape, it is crucial to not only protect technology but also to foster a deep appreciation for the scientific achievements that drive progress. This festival serves as a platform to celebrate these accomplishments and inspire future leaders in both fields."

Garik Israelian, astrophysicist and Starmus founding director, explained, "Science is a pathway to curiosity, a bridge to understanding, and a beacon of endless possibilities. Embracing science means embracing the future; it nurtures critical thinking and fuels innovation. In a rapidly evolving world, scientific knowledge empowers us to make informed decisions, question the unknown, and shape a brighter tomorrow. With the Starmus Festival, we aim to inspire people to explore, discover, and believe in the extraordinary potential that lies within the realms of science."

About ESET

For more than three decades, ESET has been providing innovative, state-of-the-art digital security for millions of businesses, consumers, and critical infrastructure. A proven pioneer in heuristics detection, machine learning and AI algorithms, ESET offers unmatched prevention-first cybersecurity solutions powered by renowned global Threat Intelligence, and an extensive R&D network led by industry-acclaimed researchers.

To stay ahead of emerging cyber threats, ESET's high-performing, easy-to-use solutions unobtrusively protect and monitor 24/7, not just to stop attacks in their tracks, but to prevent them from happening in the first place. For more information, visit http://www.eset.com or follow us on LinkedIn, X, Facebook, Instagram, YouTube and TikTok.

About Starmus

Since the very first Homo sapiens looked up at a star-filled sky, we have been awestruck by the vastness of the cosmos. Even today, we remain humbled by the sheer immensity of space, especially as progress in physics and astronomy has made us aware of the tremendous distances involved even to our closest neighboring stars.

Created by Garik Israelian PhD, astrophysicist at the Institute of Astrophysics of the Canary Islands (IAC) and Sir Brian May PhD, astrophysicist and the lead guitarist of the iconic rock band Queen, Starmus is a festival of science, art and music that has featured presentations from astronauts, cosmonauts, Nobel Prize winners and prominent figures from various scientific disciplines and musical backgrounds. Starmus brings Nobel laureates, eminent researchers, astronauts, thinkers and artists together to share their knowledge and experiences, as we search for answers to the great questions.

Stephen Hawking Medal for Science Communication

Stephen Hawking and Alexei Leonov, together with Brian May, worked to create the Stephen Hawking Medal for Science Communication in 2015, awarded to individuals and teams who have made significant contributions to science communication. Previous Stephen Hawking Medal winners include Dr. Jane Goodall, Elon Musk, Neil deGrasse Tyson, Brian Eno, Hans Zimmer, and the Apollo 11 documentary.

http://www.starmus.com


2023 Celebration of Science set for Oct. 19-20 | News | School TWU – Texas Woman’s University

DENTON, Oct. 11, 2023: The annual Celebration of Science will be held Oct. 19-20 in the Scientific Research Commons and the Ann Stuart Science Complex on the Texas Woman's University campus.

The Celebration of Science honors the four divisions of TWU's School of Sciences: biology, chemistry and biochemistry, computer sciences, and mathematics.

This year's celebration begins with the alumni reception on Thursday evening, Oct. 19, on the first floor of the SRC. Registration for this event is required. Click here to register.

Friday's speakers include three TWU alumnae and three current TWU professors. There will also be student discussions with Emily Arellano from mathematics, Bethany Lazarus from informatics, Kiran Hina Tajuddin from biology, and Derek Aguilar from chemistry.

This year's Celebration of Science is in honor of Richard Sheardy, PhD, of the TWU division of chemistry and biochemistry, as a tribute to his distinguished career and longtime support of women in science.

On Friday, Oct. 20, the celebration moves to the ASSC. After a continental breakfast, speakers will begin making their presentations. Among the topics:

Friday's events are free and open to the public.

For more details, visit the Celebration of Science website.
