
Analytics/AI conference brings new perspectives to businesses and … – University at Buffalo

BUFFALO, N.Y. – The University at Buffalo School of Management hosted the inaugural Eastern Great Lakes Analytics Conference on Nov. 3 and 4, marking the first formal gathering of industry experts and academic researchers in Western New York dedicated to exploring the frontiers of data analytics and artificial intelligence.

Organizers welcomed 130 attendees from more than 40 organizations, creating a dynamic platform for exchanging insights and shaping the future of these transformative technologies.

The first day of the event featured a prestigious lineup of industry speakers, including executives from M&T Bank, National Fuel, Hidden Layer and Lockheed Martin, who shared their expertise on real-world applications of data analytics and AI.

To complement the industry perspectives from day one, researchers from such renowned institutions as Cornell University, University of Rochester, University of Pittsburgh and the University of Toronto showcased their cutting-edge advancements in data analytics and AI on day two, sparking engaging discussions on the implications of these innovations for businesses.

"The blend of academic and industry participation fostered a stimulating environment, enabling attendees to delve into the latest advancements in data analytics and AI while exploring their practical applications," says Sanjukta Smith, chair and associate professor of management science and systems in the UB School of Management. "This synergy generated invaluable insights into the evolving landscape of these technologies and their potential to revolutionize business operations and strategic decision-making."

Smith co-chaired the conference with Kyle Hunt, assistant professor, and Dominic Sellitto, clinical assistant professor, both in the UB School of Management's Management Science and Systems Department.

"The success of the Eastern Great Lakes Analytics Conference underscores our commitment to fostering innovation and collaboration in the field of data analytics and AI," says Ananth Iyer, dean of the UB School of Management. "As the region's premier business school, the School of Management is poised to continue leading the way in shaping the future of these transformative technologies and empowering businesses to harness their full potential."

Now in its 100th year, the UB School of Management is recognized for its emphasis on real-world learning, community and impact, and the global perspective of its faculty, students and alumni. The school also has been ranked by Bloomberg Businessweek, Forbes and U.S. News & World Report for the quality of its programs and the return on investment it provides its graduates. For more information about the UB School of Management, visit management.buffalo.edu.

See original here:

Analytics/AI conference brings new perspectives to businesses and ... - University at Buffalo


Jobs of 2030; skills to develop for the future in an increasingly competitive world – The Financial Express

By Sonya Hooja

It is true that the job market is ever-evolving. But a child born in 2010 will likely face a vastly different reality upon entering the workforce than a young student who stepped out of college that same year, a gap of just 20 years. With the smartphone and internet revolution, and with artificial intelligence and data science growing by leaps and bounds between 2010 and 2030, fresh graduates face ever greater uncertainty in the job landscape.

The job market is rapidly evolving, driven by technological advancements, changing demographics, and global economic shifts. Jobs that exist today may become obsolete, while new opportunities will emerge. To thrive in this increasingly competitive world, individuals need to develop diverse skills that align with the demands of the future job market.

According to Google's Skills of the Future report, the jobs of the future will place a premium on adaptability, digital literacy, and problem-solving abilities. The report highlights the importance of soft skills such as critical thinking, creativity, and emotional intelligence. In addition, it emphasizes the need for continuous learning and upskilling throughout one's career. The ability to navigate an ever-changing technological landscape will be crucial, as new technologies emerge and replace traditional job roles.

India is experiencing a significant shift in its job market, with emerging sectors offering exciting opportunities. Some of these sectors include:

Data science and analytics: With the increasing reliance on data-driven decision-making, professionals skilled in data science, machine learning, and data analytics will be in high demand. The ability to derive meaningful insights from vast amounts of data will be crucial for organizations across industries.

AI and Machine Learning: To succeed in the job market of 2030, individuals must continuously learn, adapt, and embrace emerging trends in Artificial Intelligence (AI) and Machine Learning (ML). As per a 2023 WEF report, 85% of companies are looking to maximize AI usage in the next five years, which points to a rise in hiring of AI-trained graduates. In fact, AI and Machine Learning Specialists top the list of fast-growing jobs.

Cybersecurity: With the growing digitization of businesses and the increasing threat of cyber-attacks, cybersecurity professionals will play a vital role in safeguarding digital assets. Individuals with skills in cybersecurity, ethical hacking, and risk management will be in high demand.

Fintech: Technical expertise, financial knowledge, analytical skills and problem-solving skills are the most in-demand skills in the fintech sector. As the world of finance takes to digital platforms, the Best Workplaces in BFSI in 2023 report by Great Place To Work predicts that the BFSI sector in India will experience a significant increase in hiring activities, with companies planning to hire 26% more employees than the current year. Fintech firms, in particular, are leading the way with a 41% increase in their hiring intent.

What remains crucial in the fast-paced and competitive job market of 2030 is the need for individuals to develop a diverse skill set to thrive. In India, emerging sectors offer promising career prospects. By continuously learning, adapting, and embracing emerging trends, individuals can position themselves for success in the jobs of the future.

The author is founder and COO of Imarticus Learning. Views are personal.

Continued here:

Jobs of 2030; skills to develop for the future in an increasingly competitive world - The Financial Express


Explore the List of Data Scientist Openings in the US – Analytics Insight

In the ever-expanding landscape of technology, data has emerged as the lifeblood of innovation, driving decision-making processes across industries. As businesses strive to harness the power of data, the demand for skilled professionals capable of extracting actionable insights continues to surge. Among the most sought-after roles in this data-driven era is that of a Data Scientist. In the United States, the pursuit of excellence in data science has given rise to a multitude of opportunities for individuals passionate about transforming raw data into meaningful narratives.

This article explores the landscape of Data Scientist openings in the US, shedding light on the diverse and exciting possibilities that await those keen on navigating the world of data analytics.

PitchBook

Job Responsibilities:

Work with cross-functional teams to analyze and evaluate data on customer behavior.

Create and deploy innovative data models and algorithms to gain valuable insights.

Use predictive and prescriptive analytics to lead data-driven initiatives for improving the customer journey.

Construct and maintain complex datasets derived from customer interactions and engagement.

Communicate actionable insights to multiple stakeholders, demonstrating the impact of various business segments on Sales, Customer Success, and the company overall.

Keep abreast of market trends and develop analytics methodologies to continually improve our analytics skills.

Mentor and advise junior data scientists on the team.

Analyze customer and marketing data to find patterns, trends, and opportunities across the customer lifetime.

Investigate consumer touchpoints and interactions to learn about their preferences and behavior.

Create predictive models that estimate consumer behaviors such as churn, lifetime value, conversion rates, and propensity to buy.

Apply here

US Foods

Responsibilities:

You will be required to do the following as an Associate Data Scientist:

Data Preparation: Extract data from diverse databases; do exploratory data analysis; cleanse, massage, and aggregate data.

Best Practices and Standards: Ensure that data science features and deliverables are adequately documented and executable for cross-functional consumption.

Collaboration: Work with more senior team members to do ad hoc analyses, collaborate on code and reviews, and provide data narrative.

Model Development and Execution: As needed, monitor model performance and retraining efforts.

Communication: Share findings and thoughts on various data science activities with other members of the data science and decision science teams.

Carry out additional responsibilities as assigned by the manager

Apply here

Disney Entertainment & ESPN Technology

San Francisco, CA

Required Qualifications:

7+ years of analytical experience is required, as well as a Bachelor's degree in advanced mathematics, statistics, data science, or a related field of study.

7+ years of expertise in building machine learning models and analyzing data in Python or R

5+ years of experience developing production-level, scalable code (e.g., Python, Scala)

5+ years of experience creating algorithms for production system deployment

In-depth knowledge of contemporary machine learning algorithms (such as deep learning), models, and their mathematical foundations

Comprehensive knowledge of the most recent natural language processing methods and contextualized word embedding models

Experience building and managing pipelines (AWS, Docker, Airflow) as well as designing big-data solutions with technologies such as Databricks, S3, and Spark

Knowledge of data exploration and visualization tools such as Tableau, Looker, and others

Knowledge of statistical principles (for example, hypothesis testing and regression analysis)

Apply here

Asurion

Nashville, TN, USA

Qualifications:

Drive a test-and-learn methodology with a Minimum Viable Product (MVP) and push to learn quickly

Candidate must have the ability to find the root cause, describe, and solve difficult problems in confusing settings

Ability to interact and cooperate with people from many departments inside the organization, ranging from operations teammates to product managers and engineers

Excellent communication (written and spoken) and presentation abilities, especially the ability to create and share complex ideas with peers

The candidate must have creative ideas and not be hesitant to roll up their sleeves to get the job done

Requires a master's degree in analytics, computer science, electrical engineering, computer engineering, or a comparable advanced analytical & optimization discipline, as well as an open mind and an open heart

Familiarity with at least one deep learning framework, such as PyTorch or Tensorflow

Deep Learning and/or Machine Learning expertise earned via academic education or any amount of internship/work experience

Statistics, optimization theoretical principles, and/or optimization problem formulation knowledge acquired via academic coursework or any amount of internship/work experience.

Apply here

Read more from the original source:

Explore the List of Data Scientist Openings in the US - Analytics Insight


OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say – Reuters

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

Sam Altman, CEO of ChatGPT maker OpenAI, arrives for a bipartisan Artificial Intelligence (AI) Insight Forum for all U.S. senators hosted by Senate Majority Leader Chuck Schumer (D-NY) at the U.S. Capitol in Washington, U.S., September 13, 2023. REUTERS/Julia Nikhinson/File Photo

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math where there is only one right answer implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, researchers flagged AI's prowess and potential danger, the sources said, without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest growing software applications in history and drew investment - and computing resources - necessary from Microsoft to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.

Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker


Anna Tong is a correspondent for Reuters based in San Francisco, where she reports on the technology industry. She joined Reuters in 2023 after working at the San Francisco Standard as a data editor. Tong previously worked at technology startups as a product manager and at Google, where she worked in user insights and helped run a call center. Tong graduated from Harvard University.

Jeffrey Dastin is a correspondent for Reuters based in San Francisco, where he reports on the technology industry and artificial intelligence. He joined Reuters in 2014, originally writing about airlines and travel from the New York bureau. Dastin graduated from Yale University with a degree in history. He was part of a team that examined lobbying by Amazon.com around the world, for which he won a SOPA Award in 2022.

Krystal reports on venture capital and startups for Reuters. She covers Silicon Valley and beyond through the lens of money and characters, with a focus on growth-stage startups, tech investments and AI. She has previously covered M&A for Reuters, breaking stories on Trump's SPAC and Elon Musk's Twitter financing. Previously, she reported on Amazon for Yahoo Finance, and her investigation of the company's retail practice was cited by lawmakers in Congress. Krystal started a career in journalism by writing about tech and politics in China. She has a master's degree from New York University, and enjoys a scoop of Matcha ice cream as much as getting a scoop at work.

The rest is here:

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say - Reuters


About That Mysterious AI Breakthrough Known As Q* By OpenAI That Allegedly Attains True AI Or Is On The Path Toward Artificial General Intelligence…

Artificial general intelligence (AGI) nearly in hand? Image: Getty

In today's column, I am going to walk you through a prominent AI mystery that has caused quite a stir, leading to an incessant buzz across much of social media and garnering outsized headlines in the mass media. This is going to be quite a Sherlock Holmes adventure and sleuthing detective journey that I will be taking you on.

Please put on your thinking cap and get yourself a soothing glass of wine.

The roots of the circumstance involve the recent organizational gyrations and notable business crisis drama associated with the AI maker OpenAI, including the off and on-again firing and then rehiring of the CEO Sam Altman, along with a plethora of related carry-ons. My focus will not particularly be the comings and goings of the parties involved. I instead seek to leverage those reported facts primarily as telltale clues associated with the AI-mystery that some believe sits at the core of the organizational earthquake.

We shall start with the vaunted goal of arriving at the topmost AI.

The Background Of The AI Mystery

So, here's the deal.

Some suggest that OpenAI has landed upon a new approach to AI that either has attained true AI, which is nowadays said to be Artificial General Intelligence (AGI), or that demonstrably resides on, or at least shows, the path toward AGI. As a fast backgrounder for you, today's AI is considered not yet in the realm of being on par with human intelligence. The aspirational goal for much of the AI field is to arrive at something that fully exhibits human intelligence, which would broadly then be considered as AGI, or possibly going even further into superintelligence (for my analysis of what these superhuman AI aspects might consist of, see the link here).

Nobody has yet been able to find out and report specifically on what this mysterious AI breakthrough consists of (if indeed such an AI breakthrough was devised or invented at all). This situation could be like one of those circumstances where the actual occurrence is a far cry from the rumors that have reverberated in the media. Maybe the reality is that something of modest AI advancement was discovered but doesn't deserve the hoopla that has ensued. Right now, the rumor mill is filled with tall tales that this is the real deal and supposedly will open the door to reaching AGI.

Time will tell.

On the matter of whether the AI has already achieved AGI per se, let's noodle on that postulation. It seems hard to imagine that if the AI became true AGI we wouldn't already be regaled with what it is and what it can do. That would be a chronicle of immense magnitude. Could the AI developers involved be capable of keeping a lid on such a life-goal attainment, as though they had miraculously found the source of the Nile or essentially turned stone into gold?

Seems hard to believe that the number of people likely knowing this fantastical outcome would be utterly secretive and mum for any considerable length of time.

The seemingly more plausible notion is that they arrived at a kind of AI that shows promise toward someday arriving at AGI. You could likely keep that a private secret for a while. The grand question though looming over this would be the claimed basis for asserting that the AI is in fact on the path to AGI. Such a basis should conceivably be rooted in substantive ironclad logic, one so hopes. On the other hand, perhaps the believed assertion of being on the path to AGI is nothing more than a techie hunch.

Those kinds of hunches are at times hit-and-miss.

You see, this is the way that those ad hoc hunches frequently go. You think you've landed on the right trail, but you are actually once again back in the woods. Or you are on the correct trail, but the top of the mountain is still miles upon miles in the distance. Simply saying or believing that you are on the path to AGI is not necessarily the same as being on said path. Even if you are on the AGI path, perhaps the advancement is a mere inch whilst the distance ahead is still far away. One can certainly rejoice in advancing an inch, don't get me wrong on that. The issue is how much the inch is parlayed into being portrayed, intentionally or inadvertently, as getting us to the immediate doorstep of AGI.

The Clues That Have Been Hinted At

Now that you know the overarching context of the AI mystery, we are ready to dive into the hints or clues that so far have been reported on the matter. We will closely explore those clues. This will require some savvy Sherlock Holmes AI-considered insights.

A few caveats are worth mentioning at the get-go.

A shrewd detective realizes that some clues are potentially solid inklings, while some clues are wishy-washy or outright misleading. When you are in the fog of war about solving a mystery there is always a chance that you are bereft of sufficient clues. Later on, once the mystery is completely solved and revealed, only then can you look back and discern which clues were on target and which ones were of little use. Alluringly, clues can also be a distraction and take you in a direction that doesnt solve the mystery. And so on.

Given those complications, let's go ahead and endeavor to do the best we can with the clues that seem to be available at this time (more clues are undoubtedly going to leak out in the next few days and weeks; I'll provide further coverage in my column postings as that unfolds).

I am going to draw upon three foremost, relatively unsubstantiated clues:

You can find lots of rampant speculation online that uses only the first of those above clues, namely the name of Q*. Some believe that the mystery can be unraveled on that one clue alone. They might not know about the other two above clues. Or they might not believe that the other two clues are pertinent.

I am going to choose to use all three clues and piece them together in a kind of mosaic that may provide a different perspective than others have espoused online about the mystery. Just wanted to let you know that my detective work might differ somewhat from other narratives you might read about elsewhere online.

The First Clue Is The Alleged Name Of The AI

It has been reported widely that the AI maker has allegedly opted to name the AI software as being referred to by the notation of a capital letter Q that is followed by an asterisk.

The name or notation is this: Q*.

Believe it or not, by this claimed name alone, you can go into a far-reaching abyss of speculation about what the AI is.

I will gladly do so.

I suppose it is somewhat akin to the word "Rosebud" in the famous classic film Citizen Kane. I won't spoil the movie other than to emphasize that the entire film is about trying to make sense of the seemingly innocuous word "Rosebud". If you have time to do so, I highly recommend watching the movie since it is considered one of the best films of all time. There isn't any AI in it, so realize you would be watching the movie for its incredible plot, splendid acting, eye-popping cinematography, etc., and relishing the deep mystery ardently pursued throughout the movie.

Back to our mystery in hand.

What can we divine from the Q* name?

Those of you who are faintly familiar with everyday mathematical formulations are likely to realize that the asterisk is typically said to represent a so-called star symbol. Thus, the seemingly Q-asterisk name would conventionally be pronounced aloud as Q-star rather than as Q-asterisk. There is nothing especially out of the ordinary in mathematical notations to opt to make use of the asterisk as a star notation. It is done quite frequently, and I will shortly explain why this is the case.

Overall, the use specifically of the letter Q innately coupled with the star representation does not notably denote anything already popularized in the AI field. Ergo, I am saying that Q* doesn't jump out as meaning this particular AI technique or that particular AI technology. It is simply the letter Q that is followed by an asterisk (which we naturally assume by convention represents a star symbol).

Aha, our thinking caps now come into play.

We will separate the letter Q from its accompanying asterisk. Doing so is seemingly productive. Here's why. The capital letter Q does have significance in the AI field. Furthermore, the use of an asterisk as a star symbol does have significance in the mathematics and computer science arena. By looking at the significance of each distinctly, we can subsequently make a reasonable leap of logic by considering the meaning that arises when they are combined in unification.

I will start by unpacking the use of the asterisk.

What The Asterisk Or Star Symbol Signifies

One of the most historically well-known uses of the asterisk in a potentially similar context was the use by the mathematician Stephen Kleene when he defined something known as V*. You might cleverly observe that this notation consists of the capital letter V that is followed by the asterisk. It is pronounced as V-star.

In his paper published in the 1950s, he described the idea roughly as follows: suppose you have a set of items named by the capital letter V, and you then decide to make a different set that consists of various combinations of the items in the set V. This new set will by definition contain all the elements of set V and will furthermore show them in as many concatenated ways as we can come up with. The resulting new set is denoted as V* (there are other arcane rules about this formulation, but I am only seeking to give a brief tasting herein).

As an example about this matter, suppose that I had a set consisting of the first three lowercase letters of the alphabet: {a, b, c}. I will go ahead and refer to that set as the set V. We have a set V that consists of {a, b, c}.

You are then to come up with V* by making lots of combinations of the elements in V. You are allowed to repeat the elements as much as you wish. Thus, the V* will contain elements like this: {a, b, c, ab, ac, ba, bc, aa, bb, cc, aaa, aab, aac, }.

I trust that you see that the V* is a combination of the elements of V. This V* is kind of amazing in that it has all kinds of nifty combinations. I am not going to get into the details of why this is useful and will merely bring your attention to the fact that the asterisk or star symbol suggests that whatever set V you have there is another set V* that is much richer and fuller. I would recommend that those of you keenly interested in mathematics and computer science might want to see a classic noteworthy article by Stephen Kleene entitled "Representation of Events in Nerve Nets and Finite Automata" which was published by Princeton University Press in 1956. You can also readily find lots of explanations online about V*.
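To make the idea concrete, here is a minimal sketch in Python (my own illustration, not anything from Kleene's paper) that enumerates the members of V* up to a chosen length for the example set V = {a, b, c}; note that, by definition, V* also contains the empty string.

```python
from itertools import product

def kleene_star(V, max_length):
    """Enumerate members of V*: every concatenation of symbols drawn from V,
    from length 0 (the empty string) up to max_length."""
    members = [""]  # the empty string is in V* by definition
    for length in range(1, max_length + 1):
        for combo in product(V, repeat=length):
            members.append("".join(combo))
    return members

print(kleene_star(["a", "b", "c"], 2))
# ['', 'a', 'b', 'c', 'aa', 'ab', 'ac', 'ba', 'bb', 'bc', 'ca', 'cb', 'cc']
```

The full V* is infinite, which is why the sketch caps the length; the point is simply that the star takes a small set and supersizes it into every concatenation you can form from it.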

Your overall takeaway here is that when you use a capital letter and join it with an asterisk, the conventional implication in mathematics and computer science is that you are saying that the capital letter is essentially supersized. You are magnifying whatever the original thing is. To some degree, you are said to be maximizing it to the nth degree.

Are you with me on this so far?

I hope so.

Lets move on and keep this asterisk and star symbol stuff in mind.

The Use Of Asterisk Or Star In The Case Of Capital A

You are going to love this next bit of detective work.

Ive brought you up-to-speed about the asterisk and showed you an easy example involving the capital letter V. Well, in the AI field, there is a famous instance that involves the capital letter A. We have hit a potential jackpot regarding the underlying mystery being solved, some believe.

Allow me to explain.

The famous instance of the capital letter A which is accompanied by an asterisk in the field of AI is shown this way: A*. It is pronounced as A-star.

As an aside, when I was a university professor, I always taught A* in my university classes on AI for undergraduates and graduates. Any budding computer science student learning about AI should be at least aware of the A* and what it portends. This is a foundational keystone for AI.

In brief, a research paper in the 1960s proposed a foundational AI approach to a difficult mathematical problem such as trying to find the shortest path to get from one city to another city. If you are driving from Los Angeles to New York and you have, let's assume, thirty cities that you might go through to get to your destination, which cities would you pick to minimize the time or distance for your planned trip?

You certainly would want to use a mathematical algorithm that can aid in calculating the best or at least a really good path to take. This also relates to the use of computers. If you are going to use a computer to figure out the path, you want a mathematical algorithm that can be programmed to do so. You want that mathematical algorithm to be implementable on a computer and run as fast as possible or use the least amount of computing resources as you can.

The classic paper that formulated A* is entitled "A Formal Basis for the Heuristic Determination of Minimum Cost Paths" by Peter Hart, Nils Nilsson, and Bertram Raphael, published in IEEE Transactions on Systems Science and Cybernetics, 1968.

The paper proceeds to define the algorithm that they named as A*. You can readily find online lots and lots of descriptions about how A* works. It is a step-by-step procedure or technique. Besides being useful for solving travel-related problems, the A* is used for all manner of search-related issues. For example, when playing chess, you can think of finding the next chess move as a search-related problem. You might use A* and code it into part of a chess-playing program.
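To give a feel for how such an algorithm operates, here is a compact A* sketch in Python over a tiny, made-up road network; the city list, distances, and heuristic estimates are illustrative assumptions only, not drawn from the original paper. Setting every heuristic value to zero turns this into Dijkstra's algorithm, discussed shortly.

```python
import heapq

def a_star(graph, heuristic, start, goal):
    """A* search: expand the node with the lowest f = g + h, where g is the cost
    accrued so far and h is a heuristic estimate of the remaining cost."""
    frontier = [(heuristic[start], 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, {}).items():
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(frontier, (new_g + heuristic[neighbor], new_g,
                                          neighbor, path + [neighbor]))
    return None, float("inf")

# Hypothetical mini road network; distances are illustrative only.
graph = {
    "Los Angeles": {"Phoenix": 370, "Las Vegas": 270},
    "Phoenix": {"Denver": 820},
    "Las Vegas": {"Denver": 750},
    "Denver": {"Chicago": 1000},
    "Chicago": {"New York": 790},
    "New York": {},
}
# Made-up straight-line-style estimates of the remaining distance to New York.
heuristic = {"Los Angeles": 2400, "Phoenix": 2100, "Las Vegas": 2200,
             "Denver": 1600, "Chicago": 700, "New York": 0}

print(a_star(graph, heuristic, "Los Angeles", "New York"))
```

The heuristic is what distinguishes A* from blind search: it lets the algorithm favor nodes that appear closer to the goal, while the running cost g keeps the final answer honest.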

You might be wondering whether the A* has a counterpart possibly known as simply A. In other words, I mentioned earlier that we have V*, which is a variant or supersizing of V. You'll be happy to know that some believe that A* is somewhat based on an algorithm which is at times known as A.

Do tell, you might be thinking.

In the 1950s, the famous mathematician and computer scientist Edsger Dijkstra came up with an algorithm that is considered one of the first articulated techniques to figure out the shortest paths between various nodes in a weighted graph (once again, akin to the city traveling problem and more).

Interestingly, he figured out the algorithm in 1956 while sitting in a café in Amsterdam and, according to his telling of how things arose, the devised technique only took about twenty minutes for him to come up with. The technique became a core part of his lifelong legacy in the field of mathematics and computer science. He took his time to write it up. He published a paper about it three years later, and it is a highly readable and mesmerizing read, see E. W. Dijkstra, "A Note on Two Problems in Connection with Graphs", published in Numerische Mathematik, 1959.

Some have suggested that the later devised A* is essentially based on the A of his works. There is a historical debate about that. What can be said with relative sensibility is that the A* is a much more extensive and robust algorithm for doing similar kinds of searches. I'll leave things there and not get mired in the historical disputes.

I'd like to add two more quick comments about the use of the asterisk symbol in the computer field.

First, those of you who happen to know coding or programming or the use of computer commands are perhaps aware that a longstanding use of the asterisk has been as a wildcard character. This is pretty common. Suppose I want to inform you that you are to identify all the words that can be derived from the root word or letters "dog". For example, you might come up with the word "doggie" or the word "dogmatic". I could succinctly tell you what you can do by putting an asterisk at the end of the root word, like this: dog*. The asterisk is considered once again to be a star symbol and implies that you can put whatever letters you want after the first fixed set of three letters of "dog".
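As a quick illustration using Python's standard fnmatch module, the pattern dog* matches any string that begins with those three letters (the word list here is just a made-up example):

```python
from fnmatch import fnmatch

words = ["dog", "doggie", "dogmatic", "cat", "hotdog"]
print([w for w in words if fnmatch(w, "dog*")])
# ['dog', 'doggie', 'dogmatic']; 'hotdog' is excluded because the fixed letters must come first
```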

Secondly, another perspective on the asterisk when used with a capital letter is that it is the last or furthest possible iteration or version of something. Let's explore this. Suppose I make a piece of software and I decide to refer to it via the capital letter B. My first version might be referred to as B1. My second version might be referred to as B2. On and on this goes. I might later on have B26, the twenty-sixth version, and much later maybe B8245, which is presumably the eight thousand two hundred forty-fifth version.

A catchy or cutesy way to refer to the end of all of the versions might be to say B*. The asterisk or star symbol in this case tells us that whatever is named as B* is the highest or final of all of the versions that we could ever come up with.

I will soon revisit these points and show you why they are part of the detective work.

The Capital Letter Q Is Considered A Hefty Clue

You are now aware of the asterisk or star symbol. Congratulations!

We need to delve into the capital letter Q.

The seemingly most likely reference to the capital letter Q that exists in the field of AI would indubitably be something known as Q-learning. Some have speculated that the Q might instead be a reference to the work of the famous mathematician Richard Bellman and his optimal value function in the Bellman equation. Sure, I get that. We don't know if that's the reference being made. I'm going to make a detective's instinctive choice and steer toward the Q that is in Q-learning.

I'm using my Ouija board to help out.

Sometimes it is right, sometimes it is wrong.

Q-learning is an important AI technique. Once again, it is a topic that I always covered in my AI classes and that I expected my students to know by heart. The technique makes use of reinforcement learning. You are already generally aware of reinforcement learning by your likely life experiences.

Let's make sure you are comfortable with the intimidatingly fancy phrase "reinforcement learning".

Suppose you are training a dog to perform a handshake or shall we say paw shake. You give the dog a verbal command such as telling the cute puppy to do a handshake. The dog lifts its tiny paw to touch your outreached hand. To reward this behavior, you give the dog a delicious canine treat.

You continue doing this repeatedly. The dog is rewarded with a treat each time that it performs the heartwarming trick. If the dog doesn't do the trick when commanded, you don't provide the treat. In a sense, the denial of a treat is almost a penalty too. You could have a more explicit penalty such as scowling at the dog, but usually the more advisable course of action is to focus on rewards rather than also including explicit penalties.

All in all, the dog is being taught by reinforcement learning. You are reinforcing the behavior you desire by providing rewards. The hope is that the dog is somehow within its adorable canine brain getting the idea that doing a handshake is a good thing. The internal mental rules that the dog is perhaps devising are that when the command to do a handshake is spoken, the best bet is to lift its handy paw since doing so is amply rewarded.

Q-learning is an AI technique that seeks to leverage reinforcement learning in a computer or is said to be implemented computationally.

The algorithm consists of mathematically and computationally examining a current state or step and trying to figure out which next state or step would be the best to undertake. Part of this consists of anticipating the potential future states or steps. The idea is to see if the rewards associated with those future states can be added up and provide the maximum attainable reward.

You presumably do something like this in real life.

Consider this. If I choose to go to college, I might get a better-paying job than if I dont go to college. I might also be able to buy a better house than if I didnt go to college. There are lots of possible rewards so I might add them all up to see how much that might be. That is one course or sequence of steps and maybe it is good for me or maybe there is something better.

If I don't go to college, I can start working in my chosen field of endeavor right away. I will have four years of additional work experience ahead of those who went to college. It could be that those four years of experience will give me a long-lasting advantage over having used those years to go to college. I consider the down-the-road rewards associated with that path.

Upon adding up the rewards for each of those two respective paths, I might decide that whichever path has the maximum calculated reward is the better one for me to pick. You might say that I am adding up the expected values. To make things more powerful, I might decide to weight the rewards. For example, I mentioned that I am considering how much money I will make. It could be that I also am considering the type of lifestyle and work that I will do. I could give greater weight to the type of lifestyle and work while giving a bit less weight to the money side of things.

The formalized way to express all of this is that an agent, which in the example is me, will be undertaking a series of steps, which we will denote as states, and taking actions that transition the agent from one state to the next state. The goal of the agent entails maximizing a total reward. Upon each state or step taken, a reevaluation will occur to recalculate which next step or state seems to be the best to take.

Notice that I did not beforehand know for sure which would be the best or right steps to take. I am going to make an estimate at each state or step. I will figure things out as I go along. I will use each reward that I encounter as a further means to ascertain the next state or step to take.

Given that description, I hope you can recognize that perhaps the dog that is learning to do a handshake is doing something similar to this (we can't know for sure). The dog has to decide at each repeated trial whether to do the handshake. It is reacting in the moment, but also perhaps anticipating the potential for future rewards too. We do not yet have a means to have the dog tell us what it is thinking, so we don't know for sure what is happening in that mischievous canine mind.

I want to proffer a few more insights about Q-learning and then we will bring together everything that I have so far covered. We need to steadfastly keep in mind that we are on a quest. The quest involves solving the mystery of the alleged AI that might be heading us toward AGI.

Q-learning is often depicted as making use of a model-free and off-policy approach to reinforcement learning. That's a mouthful. We can unpack it.

Here are some of my off-the-cuff definitions that are admittedly loosey-goosey but I believe are reasonably expressive of the model and policy facets associated with Q-learning (I ask for forgiveness from the strict formalists that might view this as somewhat watered down):

Take a look at those definitions. I have noted in italics the model-free and the off-policy. I also gave you the opposites, namely model-based and the on-policy approaches since those are each respectively potentially contrasting ways of doing things. Q-learning goes the model-free and off-policy route.

The significance is that Q-learning proceeds on a trial-and-error basis (considered to be model-free) and tries to devise rules while proceeding ahead (considered to be off-policy). This is a huge plus for us. You can use Q-learning without having to in advance come up with a pre-stipulated model of how it is supposed to do things. Likewise, you dont have to come up with a bunch of rules beforehand. The overall algorithm proceeds to essentially get things done on the fly as the activity proceeds and self-derives the rules. Of related noteworthiness is that the Q-learning approach makes use of data tables and data values that are known as Q-tables and Q-values (i.e., the capital letter Q gets a lot of usage in Q-learning).
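To ground the terminology, here is a minimal tabular Q-learning sketch in Python for a toy five-state corridor in which the agent moves left or right and is rewarded only upon reaching the rightmost state. The environment, learning rate, discount factor, and exploration rate are all illustrative assumptions on my part, not anything attributed to OpenAI or to Q*.

```python
import random

# Toy corridor: states 0..4, actions 0 = left, 1 = right, reward only at state 4.
N_STATES, ACTIONS, GOAL = 5, (0, 1), 4
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount factor, exploration rate

# The Q-table: one Q-value per (state, action) pair, learned by trial and error.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Environment dynamics: move along the corridor, reward 1 upon reaching the goal."""
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy exploration: sometimes act randomly, otherwise act greedily.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Core Q-learning update: nudge Q(s, a) toward reward + gamma * max over a' of Q(s', a').
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# After training, the "go right" column dominates in every non-goal state.
for s in range(N_STATES):
    print(s, [round(q, 3) for q in Q[s]])
```

The table Q holds the Q-values, one per state-action pair, and the single update line is the heart of the technique. Nothing about the corridor's dynamics has to be specified in advance beyond the ability to try an action and observe what happens, which is the model-free part, and the rule being improved is not the same rule used to explore, which is the off-policy part.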

Okay, I appreciate that you have slogged through this perhaps obtuse or complex topic.

Your payoff is next.

The Mystery Of Q* In Light Of Q And Asterisks

You now have a semblance of what an asterisk means when used with a capital letter. Furthermore, I am leaning you toward assuming that the capital letter Q is a reference to Q-learning.

Let's jam together the Q and the asterisk and see what happens, namely this: Q*.

The combination might mean this. The potential AI breakthrough is labeled as Q because it has to do with the Q-learning technique, and maybe the asterisk or star symbol is giving us a clue that the Q-learning has somehow been advanced to a notably better version or variant. The asterisk might suggest that this is the highest or most far-out capability of Q-learning that anyone has ever seen or envisioned.

Wow, what an exciting possibility.

This would imply that the use of reinforcement learning as an AI-based approach that is model-free and off-policy can leap tall buildings and go faster than a speeding train (metaphorically) to being able to push AI closer to being AGI. If you place this into the context of generative AI such as ChatGPT by OpenAI and GPT-4 of OpenAI, perhaps those generative AI apps could be much more fluent and seem to convey reasoning if they had this Q* included in them (or this might be included in the GPT-5 that is rumored to be under development).

If only OpenAI has this Q* breakthrough (if there is such a thing), and if the Q* does indeed provide a blockbuster advantage, presumably this gives OpenAI a substantial edge over their competition. This takes us to an intriguing and ongoing AI ethics question. For my ongoing and extensive coverage of AI ethics and AI law, see the link here and the link here, just to name a few.

More:

About That Mysterious AI Breakthrough Known As Q* By OpenAI That Allegedly Attains True AI Or Is On The Path Toward Artificial General Intelligence...


The Open AI Drama: What Is AGI And Why Should You Care? – Forbes

Evolution of humans and intelligence. Image: Pixabay

Artificial general intelligence is something everyone should know and think about. This was true even before the recent OpenAI drama brought the issue to the limelight, with speculation that the leadership shakeup may have been due to disagreements about safety concerns regarding a breakthrough on AGI. Whether that is true or not (and we may never know), AGI is still serious. All of which begs the questions: what exactly is AGI, what does it mean to all of us, and what, if anything, can the average person do about it?

As expected for such a complex and impactful topic, definitions vary.

Given the recent OpenAI news, it is particularly opportune that OpenAI's chief scientist, Ilya Sutskever, actually presented his perspective on AGI just a few weeks ago at TED AI. You can find his full presentation here.

As we can see, AGI spans many dimensions. The ability to perform generalized tasks implies that AGI will affect the job market far more than the AIs that preceded it. For example, an AI that can read an X-ray and detect disease can assist doctors in their work. However, an AGI that can read the X-ray, understand the patient's personal history, make a recommendation and explain that recommendation to the patient with a kind bedside manner could conceivably replace the doctor entirely. The potential benefits and risks to world economies and jobs are massive. Add to those the ability for AGIs to learn and produce new AGIs, and the risk becomes existential. It is not clear how humanity would control such an AGI or what decisions it would make for itself.

Hard to say. Experts differ on whether AGI is never likely to happen or whether it is merely a few years away. For example, Geoff Hinton, winner of the Turing Award (the highest prize in computer science), believes AGI is less than 20 years away but that it will not present an existential threat. Meanwhile, his fellow winner of the same award, Yoshua Bengio, states that we do not know how many decades it will take to reach AGI. Much of this discrepancy also has to do with the lack of a broadly agreed-upon definition, as the examples above show.

Yes, I believe so. If nothing else, this week's drama at OpenAI shows how little we know about the technology development that is so fundamental to humanity's future, and how unstructured our global conversation on the topic is. Fundamental questions exist, such as:

Who will decide if AGI has been reached?

Would we even know that it has happened or is imminent?

What measures will be in place to manage it?

How will countries around the world collaborate or fight over it?

And so on.

For those not following The Terminator franchise, Skynet is a fictional, human-created machine network that becomes self-aware and decides to destroy humanity. I don't think this is cause for major concern. While certain parts of the AGI definition (particularly the idea of AGIs creating future AGIs) are heading in this direction, and while movies like The Terminator show a certain view of the future, history has shown us that harm caused by technology is usually caused by intentional or accidental human misuse of the technology. AGI may eventually reach some form of consciousness that is independent of humans, but it seems far more likely that human-directed AI-powered weapons, misinformation, job displacement, environmental disruption, etc. will threaten our well-being before that.

I believe the only thing each of us can do is to be informed, be AI-literate and exercise our rights, opinions and best judgement. The technology is transformative. What is not clear is who will decide how it will transform.

Along these lines, less than a month ago, U.S. President Joe Biden issued an executive order on AI, addressing a wide range of near-term AI concerns from individual privacy to responsible AI development to job displacement and necessary upskilling. While not targeted directly at AGI, these orders and similar legislation can direct responsible AI development in the short term, prior to AGI, and hopefully continuing through to AGI.

It is also worth noting that AGI is unlikely to be a binary event: one day not there and the next day there. ChatGPT appeared to many people as if it came from nowhere, but it did not. It was preceded in 2019 and 2020 by GPT-2 and GPT-3. Both were very powerful but harder to use and far less well known. While ChatGPT (GPT-3.5 and beyond) represented major advances, the trend was already in place.

Similarly, we will see AGI coming. For example, a Microsoft research team recently reported that GPT-4 has shown signs of human reasoning, a step toward AGI. As expected, these reports are often disputed, with others claiming that such observations are more indicative of imperfect testing methodologies than of actual AGI.

The real question is: What will we do about AGI before it arrives?

That decision should be made by everyone. The OpenAI drama continues, with new developments daily. However, no matter what happens with OpenAI, the AGI debate and issues are here to stay, and we will need to deal with them, ideally sooner rather than later.

I am an entrepreneur and technologist in the AI space and the CEO of AIClub and AIClubPro - pioneering AI Literacy for K-12 students and individuals worldwide (https://corp.aiclub.world and https://aiclubpro.world). I am also the author of Fundamentals of Artificial Intelligence - the first AI Textbook for Middle School and High School students.

Previously, I co-founded ParallelM and defined MLOps (Production Machine Learning and Deep Learning). MLOps is the practice for full lifecycle management of Machine Learning and AI in production. My background is in software development for distributed systems, focusing on machine learning, analytics, storage, I/O, file systems, and persistent memory. Prior to PM, I was Lead Architect/Fellow at Fusion-io (acquired by SanDisk), developing new technologies and software stacks for persistent memory, Non-Volatile Memory File System (NVMFS) and application acceleration. Before Fusion-io, I was the technology lead for server flash at Intel - heading up server platform non volatile memory technology development and partnerships and foundational work on NVM Express.

Before that, I was Chief Technology Officer at Gear6, where we built clustered computing caches for high performance I/O environments. I got my PhD at UC Berkeley doing research on clusters and distributed storage. I hold 63 patents in distributed systems, networking, storage, performance, key-value stores, persistent memory and memory hierarchy optimization. I enjoy speaking at industry and academic conferences and serving on conference program committees. I am currently co-chairing USENIX OpML 2019 - the first industry conference on Operational Machine Learning. I also serve on the steering committees of both OpML and HotStorage.

See more here:

The Open AI Drama: What Is AGI And Why Should You Care? - Forbes


AI doesn't cause harm by itself. We should worry about the people who control it – The Guardian

Opinion

The chaos at OpenAI reveals contradictions in the way we think about the technology

Sun 26 Nov 2023 02.30 EST

At times it felt less like Succession than Fawlty Towers, not so much Shakespearean tragedy as Laurel and Hardy farce. OpenAI is the hottest tech company today thanks to the success of its most famous product, the chatbot ChatGPT. It was inevitable that the mayhem surrounding the sacking, and subsequent rehiring, of Sam Altman as its CEO would play out across global media last week, accompanied by astonishment and bemusement in equal measure.

For some, the farce spoke to the incompetence of the board; for others, to a clash of monstrous egos. In a deeper sense, the turmoil also reflected many of the contradictions at the heart of the tech industry. The contradiction between the self-serving myth of tech entrepreneurs as rebel disruptors, and their control of a multibillion-dollar monster of an industry through which they shape all our lives. The tension, too, between the view of AI as a mechanism for transforming human life and the fear that it may be an existential threat to humanity.

Few organisations embody these contradictions more than OpenAI. The galaxy of Silicon Valley heavyweights, including Elon Musk and Peter Thiel, who founded the organisation in 2015, saw themselves both as evangelists for AI and heralds warning of the threat it posed. "With artificial intelligence we are summoning the demon," Musk portentously claimed.

The combination of unrestrained self-regard for themselves as exceptional individuals conquering the future, and profound pessimism about other people and society, has made fear of the apocalypse being around the corner almost mandatory for the titans of tech. Many are preppers, survivalists prepared for the possibility of a Mad Max world. "I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to," Altman told the New Yorker shortly after OpenAI was created. The best entrepreneurs, he claimed, are very paranoid, very full of existential crises. Including, inevitably, about AI.

OpenAI was created as a non-profit-making charitable trust, the purpose of which was to develop artificial general intelligence, or AGI, which, roughly speaking, is a machine that can accomplish, or surpass, any intellectual task humans can perform. It would do so, however, in an ethical fashion to benefit humanity as a whole.

Then, in 2019, the charity set up a for-profit subsidiary to help raise more investment, eventually pulling in more than $11bn (£8.7bn) from Microsoft. The non-profit parent organisation, nevertheless, retained full control, institutionalising the tension between the desire to make a profit and doomsday concerns about the products making the profit. The extraordinary success of ChatGPT only exacerbated that tension.

Two years ago, a group of OpenAI researchers left to start a new organisation, Anthropic, fearful of the pace of AI development at their old company. One later told a reporter that there was a 20% chance that a rogue AI would destroy humanity within the next decade. That same dread seems to have driven the attempt to defenestrate Altman and the boardroom chaos of the past week.

One may wonder about the psychology of continuing to create machines that one believes may extinguish human life. The irony, though, is that while fear of AI is exaggerated, the fear itself poses its own dangers. Exaggerated alarm about AI stems from an inflated sense of its capabilities. ChatGPT is superlatively good at predicting what the next word in a sequence should be; so good, in fact, that we imagine we can converse with it as with another human. But it cannot grasp, as humans do, the meanings of those words, and has negligible understanding of the real world. We remain far from the dream of artificial general intelligence. AGI will not happen, Grady Booch, chief scientist for software engineering at IBM, has suggested, even in the lifetime of your children's children.

For those in Silicon Valley who disagree, believing AGI to be imminent, humans need to be protected through "alignment", ensuring that AI is aligned with human values and follows human intent. That may seem a rational way of countervailing any harm AI might cause. Until, that is, you start asking what exactly are human values, who defines them, and what happens when they clash?

Social values are always contested, and particularly so today, in an age of widespread disaffection driven often by the breakdown of consensual standards. Our relationship to technology is itself a matter for debate. For some, the need to curtail hatred or to protect people from online harm outweighs any rights to free speech or privacy. This is the sentiment underlying Britain's new Online Safety Act. It's also why many worry about the consequences of the law.

Then there is the question of disinformation. Few people would deny that disinformation is a problem and will become even more so, raising difficult questions about democracy and trust. The question of how we deal with it remains, though, highly contentious, especially as many attempts to regulate disinformation result in even greater powers being bestowed on tech companies to police the public.

Meanwhile, another area of concern, algorithmic bias, highlights the weaknesses of arguments for alignment. The reason algorithms are prone to bias, especially against minorities, is precisely because they are aligned to human values. AI programmes are trained on data from the human world, one suffused with discriminatory practices and ideas. These become embedded into AI software, too, whether in the criminal justice system or healthcare, facial recognition or recruitment.

The problem we face is not that machines may one day exercise power over humans. That is speculation unwarranted by current developments. It is rather that we already live in societies in which power is exercised by a few to the detriment of the majority, and that technology provides a means of consolidating that power. For those who hold social, political and economic power, it makes sense to project problems as technological rather than social and as lying in the future rather than in the present.

There are few tools useful to humans that cannot also cause harm. But they rarely cause harm by themselves; they do so, rather, through the ways in which they are exploited by humans, especially those with power. That, and not fantasy fears of extinction, should be the starting point for any discussion about AI.

Kenan Malik is an Observer columnist


See the original post here:

AI doesn't cause harm by itself. We should worry about the people who control it - The Guardian

Read More..

In Two Minds: Towards Artificial General Intelligence and Conscious … – Lexology

Thought for the day

I've been extremely busy of late and finding time to just *think* has been a challenge.

In the rare moments I've found time to think, I've therefore ended up thinking about thinking: what do I really mean by thinking? Why is it important to me to find time to think? And what am I actually doing when I'm thinking? Given that my days are saturated with AI projects, it isn't at all surprising that ideas about AI have strongly coloured my ideas, and considerations about consciousness, or how to define conscious machines, have also arisen in a few of the AI projects that I've been working on.

For me at least, thinking involves what is often referred to as one's internal monologue. In truth, I tend to experience this more as an internal dialogue, with one thought being expressed by an inner voice, which then prompts responses or questions raised by that same voice but as if from a separate point of view. Not everyone seems to experience this: there are plenty of reports of people who apparently rarely experience any internal monologue in words, but find their thoughts are more emotional or a series of competing motivations for the most part.

Another area that seems to be a point of difference is what is sometimes described as a mind's eye: the ability to clearly imagine an object or visualise relationships or processes. I find this fascinating, as I experience this strongly. When thinking I am very likely to start to visualise something in some virtual space of my own conjuring alongside that inner dialogue, with the image, diagram or process being modified in my imagination in response to that ongoing dialogue. Many people, including my own dear wife, have no similar experience and insist that they have no mind's eye that they recognise. However, when I questioned her about an internal monologue, it was an inner voice saying "I really don't think I have one" that confirmed for her that she does experience thought in a monologue/dialogue modality!

Mechanical Minds

It seems to me that these aspects of an inner mental life, of a continuous experience of thought (whether expressed as a dialogue, or as a more difficult-to-express series of concepts, emotions or motivations that don't neatly crystallise into language), are a critical missing component of today's large language models (LLMs).

Being simplistic, a typical transformer-based LLM is a system that is fed an input (one or more words) and generates a single output (the most probable next word). In order to generate longer passages, the system simply feeds each output in with the previous input to generate more words until a special end token is generated. At that point, the machine stops, its task completed until further user input is provided.
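To make that loop concrete, here is a minimal sketch in Python. The next_token function is merely a stand-in for a real model's forward pass; the end token, the toy logic inside it, and all names are illustrative assumptions rather than any particular library's API.

```python
# Minimal sketch of autoregressive generation as described above.
# next_token is a placeholder for a transformer's forward pass;
# the toy logic inside it is an assumption for illustration only.

END_TOKEN = "<end>"

def next_token(tokens: list[str]) -> str:
    """Return the 'most probable' next token given the sequence so far.
    A real LLM would score its entire vocabulary here."""
    return END_TOKEN if len(tokens) >= 10 else f"word{len(tokens)}"

def generate(prompt: list[str]) -> list[str]:
    tokens = list(prompt)
    while True:
        token = next_token(tokens)   # one output per step
        if token == END_TOKEN:       # special end token: stop generating
            break
        tokens.append(token)         # feed the output back in as input
    return tokens

print(" ".join(generate(["Once", "upon", "a", "time"])))
```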

As a result, in current LLMs a key element of consciousness continuous thought is noticeably absent. These models operate in a state of dormancy, awakening only upon receiving a prompt, without any ongoing internal narrative or a thread of consciousness that links their responses over time.

This operational mode starkly contrasts with the human mind's continuous stream of consciousness, which (at least outside periods of sleep) is characterised by an uninterrupted flow of thoughts, feelings, and awareness. The very fact that the meditative command to "clear your mind" is considered so difficult speaks to this common experience of thought crowding in.

The lack of this continuity in LLMs is a significant divergence from the human experience of consciousness, which is defined not just by responses to stimuli but also by ongoing internal processes.

The Dialogic Mind

Imagine a system where two LLMs engage in an ongoing dialogue, akin to that internal conversation I described as representative of my own experience of thought. In this proposed architecture, upon receipt of a prompt, one LLM would generate a response, which the second LLM would then critically evaluate, challenge, or enhance. The first LLM would then do the same for the second LLM's output, and a dialogue would continue, with the response being surfaced to the user only when agreed between the two LLMs.

This interaction would mirror the internal dialogue characteristic of human thought processes, where thoughts and ideas are constantly being formed, questioned, and refined. Each LLM in this dialogic setup could represent a different voice or perspective within the mind, contributing to a more dynamic and complex process of thought generation and evaluation. The potential for this approach to more closely resemble the multifaceted nature of human thinking is significant, offering a step towards replicating the complexity and richness of human cognitive processes in machines.
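As a thought experiment, the control flow might look something like the sketch below. Everything in it is an assumption for illustration: ask_llm_a and ask_llm_b stand in for calls to two separately configured models, and "agreement" is decided by a deliberately naive keyword check rather than any real protocol.

```python
# Illustrative sketch of the dialogic architecture: two differently
# configured models critique each other, and the answer is surfaced
# only once they agree. All names and the naive keyword-based
# agreement test are assumptions, not any existing library's API.

def ask_llm_a(transcript: list[str]) -> str:
    # Stand-in for the first model, with its own weights,
    # hyperparameters and system context.
    return "Draft answer from model A."

def ask_llm_b(transcript: list[str]) -> str:
    # Stand-in for the second model, acting as critical reviewer
    # of A's latest output against relevance, safety and accuracy.
    return "AGREE: the draft is relevant, safe and accurate."

def agreed(critique: str) -> bool:
    return critique.strip().upper().startswith("AGREE")

def dialogic_answer(prompt: str, max_rounds: int = 6) -> str:
    transcript = [f"USER: {prompt}"]
    draft = ask_llm_a(transcript)              # first voice proposes
    for _ in range(max_rounds):
        transcript.append(f"A: {draft}")
        critique = ask_llm_b(transcript)       # second voice challenges
        transcript.append(f"B: {critique}")
        if agreed(critique):
            return draft                       # surfaced only on agreement
        draft = ask_llm_a(transcript)          # first voice revises
    return draft  # give up after max_rounds to avoid a conversational loop

print(dialogic_answer("What is the capital of France?"))
```

Note that the transcript built up inside the loop is also the artefact the explainability discussion below relies on: it records the initial output, each challenge and each revision.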

This dialogic system approach offers multiple potential benefits. Among these is that it promises a richer understanding of context, as the conversation between the two models ensures that responses are not simply reactionary, but reflective and considerate of broader contexts. This dynamic could lead to more relevant and accurate responses, more closely aligned with the user's intent and the nuances of the query.

Additionally, it could operate to mitigate errors and hallucinatory responses. The second LLM would serve as a critical reviewer of the first's output (and vice versa), ensuring responses are logical, relevant, and free from undesirable elements. This verification process, guided by high-level fixed objectives for the system like relevance, safety, and accuracy, adds a layer of quality control that is currently missing in single-LLM systems.

To work really effectively, the two LLMs would have to be different in some ways, whether in terms of the underlying weights and biases, or the hyperparameters and context each has at the outset. A system built from two instances of the same LLM (or simulated using one LLM asked to play both roles) would be likely to agree too readily and undermine any potential advantages. In addition, the benefits of a continuing consciousness described below might be undermined if two too-similar machines simply got into a conversational loop.

Enhancing Explainability

One particular potential advantage is in the area of explainability.

In the evolving landscape of AI, explainability stands as a critical challenge, particularly in the context of AI regulation. We've seen explainability cited as a specific challenge in almost every policy paper and regulation. The dialogic model of AI, where two LLMs engage in an internal conversation, holds significant promise in advancing explainability. This aspect is not just a technical improvement; it's a vital step toward meeting regulatory requirements and public expectations for transparent AI systems.

At the core of this model's benefit is the ability to open the "black box" of AI decision-making. By accessing and analysing the dialogue between the two LLMs, we can observe the initial output, the challenge-response process, and understand the formation of the final output. This approach allows us to unravel the thought process of the AI, akin to witnessing the cognitive journey a human decision-maker undergoes.

This level of insight into an AI's decision-making is analogous to, and in some ways surpasses, the explainability we expect from human decision-makers. Humans, when asked to articulate their decision-making process, often struggle to fully capture the nuances of their thought processes, which are influenced by a myriad of conscious and unconscious factors. Humans are inherently black-box decision-makers, occasionally prone to irrational or emotionally driven decisions. In contrast, the dialogic AI model provides a more tangible and accessible record of its reasoning.

Being able to read the machine's mind in this way represents a significant leap in the transparency of decision-making. It surpasses the often opaque and retroactively generated explanations provided by human decision-makers. This enhanced transparency is not just about understanding how and why an AI system reached a particular conclusion; it's also about being able to identify and rectify biases, errors, or other areas of concern. Such a capability is invaluable in auditing AI decisions, enhancing accountability, and fostering a deeper trust in AI systems among users and regulators alike.

Therefore, the dialogic model's contribution to explainability is multifaceted. It not only addresses a fundamental challenge in the field of AI but also sets a new standard for decision-making transparency that, in some respects, goes beyond what is currently achievable with human decision-makers. This progress in explainability is a critical step in aligning AI systems more closely with societal expectations and ethical standards.

Towards a Continuous Machine Consciousness

The continuous interaction between the two LLMs in a dialogic system would raise questions about whether it should be viewed as a form of machine consciousness. Unlike current models that react passively to inputs, these LLMs would engage actively with each other, creating a semblance of an ongoing internal dialogue. By integrating additional sensory inputs, such as visual, auditory, and contextual data, these models could develop a more holistic understanding of their environment. This approach could lead to AI that understands not only text but can interpret a range of cues like facial expressions, vocal tones, and environmental contexts, moving closer to a form of embodied AI that possesses awareness of its surroundings.

Consider the interactions we never see from current input-output LLMs but might see in dialogue with a human: answering, and then following up on their own answer with further thoughts expanding on their point after a short period; chasing the other person for a response, or checking they were still there, if there was a long pause; changing their mind after further reflection. Our LLM pair in continuing dialogue could manifest all of these behaviours.

At the same time, continuous thought carries with it a greater probability that other emergent properties could arise: agentive behaviours, development and pursuit of goals, power-seeking, and so on. A full consideration of the control problem is beyond the scope of this short piece, but these are factors that need to be considered and addressed.

In itself this asks uncomfortable questions about the nature and status of such a machine. At a point where the machine has a continuing experience, what moral significance attaches to the act of pausing it or resetting it? While it has been relatively easy to dismiss apparent claims of a subjective desire to remain in operation from today's LLMs, given that (outside of short bursts when responding to user input) they have no continued experience, would the same be true of our dialogic LLM, especially if there is evidence that it is continually thinking and experiencing the world?

The dual-LLM system concept reflects a deeper duality at the heart of our own experience. The human brain's structure, with its two hemispheres each playing a distinct role in cognitive processes, means that each of us is really two beings in one (albeit, in humans, it appears that the centres of language depend far more heavily on the left hemisphere). Just as our left and right hemispheres work together to form a cohesive cognitive experience, the two LLMs could complement each other's strengths and weaknesses, leading to a more balanced and comprehensive AI system. This analogy to the human brain's structure is not just symbolic; it could provide insights into how different cognitive processes can be integrated to create a more sophisticated and capable AI.

Beyond Dualism

While a two-LLM system represents an efficient balance between mimicking human-like consciousness and computational feasibility, the potential extends far beyond this. Envision a network where multiple LLMs, each specialised in different areas, contribute to the decision-making process. This could lead to an AI system with a depth and breadth of knowledge and understanding far surpassing current models. However, this increase in complexity would demand significantly more computational power and could result in slower response times. Therefore, while a multi-LLM system offers exciting possibilities, the dual-LLM model might present the most practical balance between simulating aspects of consciousness and maintaining operational efficiency.

These advancements in LLM architecture not only promise more effective and reliable AI systems but also offer a window into understanding and replicating the intricate nature of human thought and consciousness. By embracing these new models, we step closer to bridging the gap between artificial and human intelligence, unlocking new possibilities in the realm of AI development.

The Future Is Now

All of this may be nearer than we think. The pace of progress in AI has been and continues to be absolutely breathtaking.

There is at least one high-profile model coming soon with a name that is redolent of twins. And this type of continued-consciousness, dialogue-based machine is not necessarily a huge evolution of the mixture-of-experts architectures which have been used to allow very large networks with expertise across many domains to run more efficiently.

Sceptics who think Artificial General Intelligence remains decades out may find that a conscious machine is here sooner than they think.

See original here:

In Two Minds: Towards Artificial General Intelligence and Conscious ... - Lexology

Read More..

Forget dystopian scenarios AI is pervasive today, and the risks are … – The Conversation

The turmoil at ChatGPT-maker OpenAI, bookended by the board of directors firing high-profile CEO Sam Altman on Nov. 17, 2023, and rehiring him just four days later, has put a spotlight on artificial intelligence safety and concerns about the rapid development of artificial general intelligence, or AGI. AGI is loosely defined as human-level intelligence across a range of tasks.

The OpenAI board stated that Altman's termination was for lack of candor, but speculation has centered on a rift between Altman and members of the board over concerns that OpenAI's remarkable growth (products such as ChatGPT and Dall-E have acquired hundreds of millions of users worldwide) has hindered the company's ability to focus on catastrophic risks posed by AGI.

OpenAI's goal of developing AGI has become entwined with the idea of AI acquiring superintelligent capabilities and the need to safeguard against the technology being misused or going rogue. But for now, AGI and its attendant risks are speculative. Task-specific forms of AI, meanwhile, are very real, have become widespread and often fly under the radar.

As a researcher of information systems and responsible AI, I study how these everyday algorithms work and how they can harm people.

AI plays a visible part in many people's daily lives, from face recognition unlocking your phone to speech recognition powering your digital assistant. It also plays roles you might be vaguely aware of: for example, shaping your social media and online shopping sessions, guiding your video-watching choices and matching you with a driver in a ride-sharing service.

AI also affects your life in ways that might completely escape your notice. If you're applying for a job, many employers use AI in the hiring process. Your bosses might be using it to identify employees who are likely to quit. If you're applying for a loan, odds are your bank is using AI to decide whether to grant it. If you're being treated for a medical condition, your health care providers might use it to assess your medical images. And if you know someone caught up in the criminal justice system, AI could well play a role in determining the course of their life.

Many of the AI systems that fly under the radar have biases that can cause harm. For example, machine learning methods use inductive logic, which starts with a set of premises, to generalize patterns from training data. A machine learning-based resume screening tool was found to be biased against women because the training data reflected past practices when most resumes were submitted by men.
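A toy illustration of that mechanism, using entirely synthetic data and scikit-learn (assumed to be installed): the feature names and numbers below are invented for demonstration, not drawn from the actual screening tool.

```python
# Toy illustration (entirely synthetic data) of a screening model
# inheriting bias from historical hiring decisions. Numbers and
# feature names are invented; scikit-learn is assumed installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)        # 0 = male, 1 = female
skill = rng.normal(0, 1, n)           # identically distributed across groups
# Historical labels: equally skilled women were hired less often.
hired = (skill - 0.8 * gender + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# The learned gender coefficient is negative: the model has encoded
# the past practice rather than actual ability.
print("coefficients [skill, gender]:", model.coef_[0])
```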

The use of predictive methods in areas ranging from health care to child welfare could exhibit biases such as cohort bias that lead to unequal risk assessments across different groups in society. Even when legal practices prohibit discrimination based on attributes such as race and gender for example, in consumer lending proxy discrimination can still occur. This happens when algorithmic decision-making models do not use characteristics that are legally protected, such as race, and instead use characteristics that are highly correlated or connected with the legally protected characteristic, like neighborhood. Studies have found that risk-equivalent Black and Latino borrowers pay significantly higher interest rates on government-sponsored enterprise securitized and Federal Housing Authority insured loans than white borrowers.
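Proxy discrimination can be sketched the same way: drop the protected attribute from the model, keep a correlated feature such as a neighborhood code, and much of the disparity survives. Again, this is a synthetic illustration, not the methodology of the studies cited above.

```python
# Synthetic sketch of proxy discrimination: the protected attribute
# is excluded from the model, but a highly correlated feature (here
# a toy "neighborhood" code) reproduces much of the disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
protected = rng.integers(0, 2, n)                        # legally protected attribute
neighborhood = (protected + (rng.random(n) < 0.15)) % 2  # ~85% correlated proxy
income = rng.normal(0, 1, n)
# Historical outcomes disadvantage the protected group.
approved = (income - 0.8 * protected + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([income, neighborhood])       # protected attribute excluded
model = LogisticRegression().fit(X, approved)

for group in (0, 1):
    rate = model.predict(X[protected == group]).mean()
    print(f"predicted approval rate, group {group}: {rate:.2f}")
```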

Another form of bias occurs when decision-makers use an algorithm differently from how the algorithm's designers intended. In a well-known example, a neural network learned to associate asthma with a lower risk of death from pneumonia. This was because asthmatics with pneumonia are traditionally given more aggressive treatment that lowers their mortality risk compared to the overall population. However, if the outcome from such a neural network is used in hospital bed allocation, then those with asthma and admitted with pneumonia would be dangerously deprioritized.

Biases from algorithms can also result from complex societal feedback loops. For example, when predicting recidivism, authorities attempt to predict which people convicted of crimes are likely to commit crimes again. But the data used to train predictive algorithms is actually about who is likely to get re-arrested.

The Biden administration's recent executive order and enforcement efforts by federal agencies such as the Federal Trade Commission are the first steps in recognizing and safeguarding against algorithmic harms.

And though large language models, such as GPT-3 that powers ChatGPT, and multimodal large language models, such as GPT-4, are steps on the road toward artificial general intelligence, they are also algorithms people are increasingly using in school, work and daily life. Its important to consider the biases that result from widespread use of large language models.

For example, these models could exhibit biases resulting from negative stereotyping involving gender, race or religion, as well as biases in representation of minorities and disabled people. As these models demonstrate the ability to outperform humans on tests such as the bar exam, I believe that they require greater scrutiny to ensure that AI-augmented work conforms to standards of transparency, accuracy and source crediting, and that stakeholders have the authority to enforce such standards.

Ultimately, who wins and loses from large-scale deployment of AI may not be about rogue superintelligence, but about understanding who is vulnerable when algorithmic decision-making is ubiquitous.

View original post here:

Forget dystopian scenarios AI is pervasive today, and the risks are ... - The Conversation

Read More..

Role of AI on the Battlefield Debated as Putin Stakes New Policy Position – Decrypt

As world leaders band together to shape shared policies around the development of artificial intelligence, policymakers, including Russian President Vladimir Putin, are looking to leverage the technology on the battlefields of the future.

"The West should not be allowed to monopolize AI," Putin said during a recent conference on AI, signaling that he would advance an ambitious Russian AI strategy, according to a Friday report by Reuters.

"In the very near future, as one of the first steps, a presidential decree will be signed, and a new version of the national strategy for the development of artificial intelligence will be approved," Putin said during the Artificial Intelligence Journey conference.

The competition among Microsoft, Google, and Amazon to bring more advanced AI to the masses has been compared to a nuclear arms race, even as an actual AI arms race is unfolding between the United States and China. On that front, top U.S. military contractors, including Lockheed Martin, General Dynamics, and Raytheon, are developing AI tech for military operations.

Another company working on combat AI is San Diego-based Shield AI, recently featured in the Netflix documentary Unknown: Killer Robots.

Shield AI is an American aerospace and defense technology company founded by brothers Brandon Tseng and Ryan Tseng, along with Andrew Reiter in 2015. Shield AI is responsible for the Nova line of unmanned aerial vehicles (UAV) that the U.S. military already uses in urban environments where GPS or radio frequencies are unavailable.

While automated war machines may conjure visions of the T-800 from the Terminator series, Logan says the goal of bringing AI to the battlefield is about saving lives.

"The success of Nova is you could push a button and go explore that building, and Nova would go fly into that building, and it would go into a room, spin around 360 degrees, perceive the environment, and make decisions based on what to do and then continue to explore," Shield AI Director of Engineering Willie Logan told Decrypt. "The whole goal of that was to provide [soldiers] on the ground insights into what was in the building before they had to walk in themselves."

Shield AI calls its AI software the "hivemind." As Logan explained, the difference between an AI-powered UAV and one guided by humans is that instead of a human telling the UAV how to fly and waiting for the operator to identify a target, the AI is programmed to look for the target and then monitor the object once it's discovered.
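In very rough terms, the behaviour Logan describes is a search-then-track loop. The sketch below is purely illustrative; every function and value in it is an invented placeholder, and none of it reflects Shield AI's actual software or any real autopilot API.

```python
# Purely illustrative search-then-track loop, loosely mirroring the
# "find the target, then monitor it" behaviour described above.
# Every name here is an invented placeholder, not Shield AI's software.
import random
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    position: tuple  # (x, y) in the vehicle's map frame

def sense(step: int) -> list:
    """Placeholder perception: occasionally 'spots' the target."""
    if step > 3 and random.random() < 0.5:
        return [Detection("target", (round(random.random(), 2), round(random.random(), 2)))]
    return []

def patrol_waypoint(step: int) -> tuple:
    return (step % 10, (step * 3) % 10)   # canned search pattern

def autonomous_mission(max_steps: int = 15) -> None:
    tracked = None
    for step in range(max_steps):
        detections = sense(step)
        if tracked is None:
            # Search phase: the vehicle picks its own path while
            # looking for the target, with no operator steering it.
            if detections:
                tracked = detections[0]
                print(f"step {step}: target found at {tracked.position}")
            goal = patrol_waypoint(step)
        else:
            # Monitor phase: keep station on the object once discovered.
            goal = tracked.position
        print(f"step {step}: flying toward {goal}")

autonomous_mission()
```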

In addition to adding AI brains to drones, Shield AI partnered with defense contractor Kratos Defense to add an AI pilot to its unmanned XQ-58A fighter jet, the Valkyrie. In October, Shield AI announced it had raised $200 million in new investment, giving the company a $2.7 billion valuation.

The U.S. military has invested heavily in leveraging AI, including generative AI, to conduct virtual military operations based on military documents fed into the AI model.

In August, Deputy Secretary of Defense Kathleen Hicks unveiled the Pentagon's Replicator initiative, which aims to "field attritable autonomous systems at scale of multiple thousands, in multiple domains, within the next 18 to 24 months."

Others developing battlefield AI include European AI defense developer Helsing, which announced raising $223 million in Series B funding in September, including from Swedish airplane and car manufacturer Saab, creator of the Gripen fighter jet.

Logan said that while the idea of killer robots may be good for a Hollywood blockbuster, AI is about keeping humans out of harm's way while keeping humans in the loop.

"I really highlight the shield part of Shield AI," Logan said. "By giving the United States this capability, [Shield AI] is providing a deterrence." Logan cautioned that even if the United States said it won't develop AI tools for war, that does not mean other countries won't.

"I think if we can be in the forefront of it and design it in a way that we think is the right way for the world to use this," Logan said, "we can help deter bad actors from doing it the wrong way."

Edited by Ryan Ozawa.

Link:

Role of AI on the Battlefield Debated as Putin Stakes New Policy Position - Decrypt

Read More..