
5 Data Structures That Every Data Scientist Should Learn – Analytics Insight

5 most common and important data structures that every data scientist should learn and master

Data structures are the building blocks of data science. They are the ways of organizing and storing data in a computer so that it can be accessed and manipulated efficiently. Data structures can affect the performance, complexity, and readability of your code. Therefore, it is important to learn the most common and useful data structures for data science. In this article, we will introduce you to 5 data structures that every data scientist should learn and how they can help you solve various data problems.

1. Stacks - Stacks are data structures that follow the Last In, First Out (LIFO) principle. Elements are added and removed from the top of the stack. Stacks are efficient for implementing operations such as function calls and backtracking.

2. Queues - Queues are data structures that follow the First In, First Out (FIFO) principle. Elements are added to the back of the queue and removed from the front of the queue. Queues are efficient for implementing operations such as job scheduling and message processing.

3. Trees - Trees are hierarchical data structures that consist of a set of nodes, where each node can have one or more child nodes. Trees are efficient for storing and searching data that has a hierarchical relationship, such as a file system or a directory of employees.

4. Heaps - Heaps are data structures that maintain a partial order: the highest-priority element (the smallest or largest, depending on the heap type) is always at the root. Heaps are efficient for implementing priority queues and sorting algorithms such as heapsort.

5. Hash tables - Hash tables are data structures that map keys to values. Hash tables are efficient for finding the value associated with a given key.
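To make these concrete, here is a brief illustration (not from the original article) of how each structure maps onto Python's standard library:

```python
from collections import deque
import heapq

# 1. Stack (LIFO): a plain list, pushing and popping from the same end.
stack = []
stack.append('frame_a')
stack.append('frame_b')
print(stack.pop())          # 'frame_b' - last in, first out

# 2. Queue (FIFO): deque gives O(1) appends and pops at both ends.
queue = deque()
queue.append('job_1')
queue.append('job_2')
print(queue.popleft())      # 'job_1' - first in, first out

# 3. Tree: a minimal nested-node representation, e.g. a file system.
tree = {'name': '/', 'children': [{'name': 'home', 'children': []}]}

# 4. Heap: heapq maintains the heap property on a list, so the
# smallest element is always at index 0 (a min-heap).
heap = []
heapq.heappush(heap, 5)
heapq.heappush(heap, 1)
print(heapq.heappop(heap))  # 1 - the highest-priority element comes out first

# 5. Hash table: dict maps keys to values with O(1) average lookup.
prices = {'AAPL': 178.2, 'GOOG': 139.5}
print(prices['AAPL'])
```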

Here is the original post:

5 Data Structures That Every Data Scientist Should Learn - Analytics Insight

Read More..

Unveiling real-time economic insights with search big data – Phys.org


by KeAi Communications Co.


Economic indicators released by the government are pivotal in shaping decisions across both the public and private sectors. However, a significant limitation of these indicators lies in their timeliness, as they rely on macroeconomic factors like inventory turnover and iron production. For instance, the Japan Cabinet Office's Indexes of Business Conditions are typically released with a two-month delay.

To overcome the drawbacks of conventional macro-variable-driven techniques, a team of Japanese researchers developed a big data-driven method capable of providing accurate nowcasts for macroeconomic indicators. Importantly, this approach eliminates the need for aggregating semi-macroeconomic data and relies solely on non-prescribed search engine query data (Search Big Data) obtained from a prominent search engine used by more than 60% of the nation's internet users.

"Our new model demonstrated the ability to forecast key Japanese economic indicators in real time (= nowcast), even amid the challenges posed by pandemic-related disruptions," said co-corresponding author of the study, Kazuto Ataka. "By leveraging search big data, the model identifies highly correlated queries and performs multiple regression analysis to provide timely and accurate economic insights."

Remarkably, the model showed adaptability and resilience even in the face of rapid economic shifts and unpredictable scenarios. Furthermore, in-depth analysis has revealed that economic activities are influenced not only by economic factors but also by fundamental human desires, including libido and desire for laughter. This underscores the complex interplay between human interests and economic developments.


"Our findings offer a nuanced perspective for understanding real-time economic trends. The model's outstanding performance in nowcasting during the pandemic represents a significant advancement over current methodologies, emphasizing the potential of incorporating various real-time data sources to enhance the precision of economic nowcasting," added Ataka.

The study, published in The Journal of Finance and Data Science, stands as a significant advancement in the field of economic nowcasting, opening avenues for more informed and timely decision-making in both the public and private sectors.

More information: Goshi Aoki et al, Data-Driven Estimation of Economic Indicators with Search Big Data in Discontinuous Situation, The Journal of Finance and Data Science (2023). DOI: 10.1016/j.jfds.2023.100106

Provided by KeAi Communications Co.

Follow this link:

Unveiling real-time economic insights with search big data - Phys.org

Read More..

EDIH Data Scientist, School of Computer Science job with … – Times Higher Education

Applications are invited for a temporary post of an EDIH Data Scientist within the UCD School of Computer Science - CeADAR.

Applications are invited for the positions of EDIH Data Scientists in the newly established European Digital Innovation Hub (EDIH) for AI in Ireland as part of the CeADAR Centre - Ireland's Centre for Applied Artificial Intelligence. CeADAR has been successful in the Europe-wide competitive selection process to be the EDIH for AI in Ireland in addition to its continuing national status.

There are 4 key services that this AI EDIH will provide:

The EDIH is seeking experienced individuals who have a demonstrated, successful track record in data science in industrial research settings (>2 years) or in academic centres. Individuals in this role are expected to have proven experience applying artificial intelligence, machine learning, and computational statistics to real-world problems. The ideal candidate will have a keen interest in contributing to the development of proofs of concept to allow companies to leverage the benefits of state-of-the-art AI algorithms.

Relevant areas of interest include: deep learning, explainable AI, computer vision, privacy preserving machine learning, reinforcement learning, natural language processing, self and semi-supervised learning, and active learning.

Equality, Diversity and Inclusion

UCD is committed to creating an inclusive environment where diversity is celebrated, and everyone is afforded equality of opportunity. To that end the university adheres to a range of equality, diversity and inclusion policies. We encourage applicants to consult those policies at https://www.ucd.ie/equality/. We welcome applications from everyone, including those who identify with any of the protected characteristics that are set out in our Equality, Diversity and Inclusion policy.

Salary Range: €53,000 - €59,000 per annum

Appointment on the above range will be dependent upon qualifications and experience.

Closing date: 17:00hrs (local Irish time) on 26th of October 2023.

Applications must be submitted by the closing date and time specified. Any applications which are still in progress at the closing time of 17:00hrs (Local Irish Time) on the specified closing date will be cancelled automatically by the system. UCD are unable to accept late applications.

UCD do not require assistance from Recruitment Agencies. Any CVs submitted by Recruitment Agencies will be returned.

Prior to application, further information (including application procedure) should be obtained from the Work at UCD website: https://www.ucd.ie/workatucd/jobs/

Continue reading here:

EDIH Data Scientist, School of Computer Science job with ... - Times Higher Education

Read More..

Mastering the Art of Data Cleaning in Python – KDnuggets

Data cleaning is a critical part of any data analysis process. It's the step where you remove errors, handle missing data, and make sure that your data is in a format that you can work with. Without a well-cleaned dataset, any subsequent analyses can be skewed or incorrect.

This article introduces you to several key techniques for data cleaning in Python, using powerful libraries like pandas, numpy, seaborn, and matplotlib.

Before diving into the mechanics of data cleaning, let's understand its importance. Real-world data is often messy. It can contain duplicate entries, incorrect or inconsistent data types, missing values, irrelevant features, and outliers. All these factors can lead to misleading conclusions when analyzing data. This makes data cleaning an indispensable part of the data science lifecycle.

We'll cover the following data cleaning tasks.

Before getting started, let's import the necessary libraries. We'll be using pandas for data manipulation, and seaborn and matplotlib for visualizations.

We'll also import the datetime Python module for manipulating dates.
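The article's import block isn't preserved in this excerpt, but it likely resembles the following (the aliases are the conventional ones):

```python
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import datetime as dt
```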

First, we'll need to load our data. In this example, we're going to load a CSV file using pandas. We also add the delimiter argument.
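A sketch of the loading step; the file name and delimiter value are placeholders, since the excerpt doesn't preserve them:

```python
# File name and delimiter are assumptions standing in for the article's CSV.
df = pd.read_csv('real_estate.csv', delimiter=';')
```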

Next, it's important to inspect the data to understand its structure, what kind of variables we're working with, and whether there are any missing values. Since the data we imported is not huge, let's have a look at the whole dataset.
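Since the dataset is small, printing it in full is enough:

```python
print(df)
```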

Here's how the dataset looks.

You can immediately see there are some missing values. Also, the date formats are inconsistent.

Now, let's take a look at the DataFrame summary using the info() method.
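That call, for reference:

```python
df.info()
```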

Here's the code output.

We can see that only the square_feet column doesn't have any NULL values, so we'll somehow have to handle this. Also, the advertisement_date and sale_date columns are of the object data type, even though they should be dates.

The column location is completely empty. Do we need it?

We'll show you how to handle these issues. We'll start by learning how to delete unnecessary columns.

There are two columns in the dataset that we don't need in our data analysis, so we'll remove them.

The first column is buyer. We don't need it, as the buyer's name doesn't impact the analysis.

We're using the drop() method with the specified column name. We set the axis to 1 to specify that we want to delete a column. Also, the inplace argument is set to True so that we modify the existing DataFrame rather than creating a new DataFrame without the removed column.
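Based on that description, the call is:

```python
# Drop the 'buyer' column in place; axis=1 targets columns rather than rows.
df.drop('buyer', axis=1, inplace=True)
```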

The second column we want to remove is location. While it might be useful to have this information, this is a completely empty column, so let's just remove it.

We take the same approach as with the first column.
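The same call, with the other column name:

```python
# Drop the empty 'location' column the same way.
df.drop('location', axis=1, inplace=True)
```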

Of course, you can remove these two columns simultaneously.
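Passing a list of column names does both in a single call (as an alternative to the two separate calls above):

```python
df.drop(['buyer', 'location'], axis=1, inplace=True)
```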

Both approaches return the following DataFrame.

Duplicate data can occur in your dataset for various reasons and can skew your analysis.

Let's detect the duplicates in our dataset. Here's how to do it.

The code below uses the duplicated() method to check for duplicates across the whole dataset. Its default setting is to consider the first occurrence of a value as unique and the subsequent occurrences as duplicates. You can modify this behavior using the keep parameter. For instance, df.duplicated(keep=False) would mark all duplicates as True, including the first occurrence.
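The check itself is a one-liner:

```python
print(df.duplicated())
```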

Here's the output.

The row with index 3 has been marked as a duplicate because row 2, which has the same values, is its first occurrence.

Now we need to remove duplicates, which we do with the following code.
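A minimal version of that step:

```python
df = df.drop_duplicates()
```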

The drop_duplicates() function considers all columns while identifying duplicates. If you want to consider only certain columns, you can pass them as a list to this function like this: df.drop_duplicates(subset=['column1', 'column2']).

As you can see, the duplicate row has been dropped. However, the indexing stayed the same, with index 3 missing. We'll tidy this up by resetting indices.

This task is performed by using the reset_index() function. The drop=True argument is used to discard the original index. If you do not include this argument, the old index will be added as a new column in your DataFrame. By setting drop=True, you are telling pandas to forget the old index and reset it to the default integer index.
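That is:

```python
# drop=True discards the old index instead of keeping it as a new column.
df = df.reset_index(drop=True)
```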

For practice, try to remove duplicates from this Microsoft dataset.

Sometimes, data types might be incorrectly set. For example, a date column might be interpreted as strings. You need to convert these to their appropriate types.

In our dataset, we'll do that for the advertisement_date and sale_date columns, as they are shown as the object data type. Also, the dates are formatted differently across the rows. We need to make the format consistent while converting the values to dates.

The easiest way is to use the to_datetime() method. Again, you can do that column by column, as shown below.

When doing that, we set the dayfirst argument to True because some dates start with the day first.
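A sketch of the column-by-column conversion:

```python
# dayfirst=True tells pandas to read ambiguous dates as day/month/year.
df['advertisement_date'] = pd.to_datetime(df['advertisement_date'], dayfirst=True)
df['sale_date'] = pd.to_datetime(df['sale_date'], dayfirst=True)
```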

You can also convert both columns at the same time by using the apply() method with to_datetime().
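For example:

```python
df[['advertisement_date', 'sale_date']] = df[
    ['advertisement_date', 'sale_date']
].apply(pd.to_datetime, dayfirst=True)
```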

Both approaches give you the same result.

Now the dates are in a consistent format. We see that not all data has been converted. There's one NaT value in advertisement_date and two in sale_date. This means those dates are missing.

Let's check if the columns are converted to dates by using the info() method.

As you can see, both columns are now in the datetime64[ns] format.

Now, try to convert the data from TEXT to NUMERIC in this Airbnb dataset.

Real-world datasets often have missing values. Handling missing data is vital, as certain algorithms cannot handle such values.

Our example also has some missing values, so let's take a look at the two most common approaches to handling missing data.

If the number of rows with missing data is insignificant compared to the total number of observations, you might consider deleting these rows.

In our example, the last row has no values except the square feet and advertisement date. We can't use such data, so let's remove this row.

Here's the code where we indicate the row's index.
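The row's actual index label isn't preserved in this excerpt, so the sketch below drops the last row by position instead:

```python
# Stand-in for the article's drop by explicit index label.
df = df.drop(df.index[-1])
```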

The DataFrame now looks like this.

The last row has been deleted, and our DataFrame now looks better. However, there is still some missing data, which we'll handle using another approach.

If you have significant missing data, a better strategy than deleting could be imputation. This process involves filling in missing values based on other data. For numerical data, common imputation methods involve using a measure of central tendency (mean, median, mode).

In our already changed DataFrame, we have NaT (Not a Time) values in the columns advertisement_date and sale_date. We'll impute these missing values using the mean() method.

The code uses the fillna() method to find and fill the null values with the mean value.
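A sketch of the column-by-column imputation (pandas can take the mean of a datetime column directly, returning a Timestamp):

```python
df['advertisement_date'] = df['advertisement_date'].fillna(df['advertisement_date'].mean())
df['sale_date'] = df['sale_date'].fillna(df['sale_date'].mean())
```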

You can also do the same thing in one line of code. We use apply() with a function defined using lambda. As above, this function uses the fillna() and mean() methods to fill in the missing values.
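The one-line version:

```python
df[['advertisement_date', 'sale_date']] = df[
    ['advertisement_date', 'sale_date']
].apply(lambda col: col.fillna(col.mean()))
```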

The output in both cases looks like this.

Our sale_date column now has times, which we don't need. Let's remove them.

We'll use the strftime() method, which converts the dates to their string representation in a specific format.
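The target format string isn't preserved in this excerpt; assuming a plain year-month-day layout:

```python
# Note: strftime returns strings, so the column reverts to object dtype.
df['sale_date'] = df['sale_date'].dt.strftime('%Y-%m-%d')
```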

Original post:

Mastering the Art of Data Cleaning in Python - KDnuggets

Read More..

"Missing Law of Nature" Proposes How Stars and Minerals Evolve … – Technology Networks


An interdisciplinary study, drawing on expertise from fields including philosophy of science, astrobiology, data science, mineralogy and theoretical physics, has identified a previously overlooked aspect of Darwin's theory of evolution. The research extends the theory beyond the traditional confines of biological life and proposes a universal law applicable to an array of systems such as planetary bodies, stars, minerals and even atoms. The paper unveils what the authors term a "missing law of nature" that encapsulates an inherent principle shaping the evolution of complex natural systems.

The study was published in the Proceedings of the National Academy of Sciences.


The new work details a "Law of Increasing Functional Information": the tendency for systems composed of a mix of components to evolve towards increased complexity, diversity and patterning. The law is applicable to any system characterized by a multitude of configurations, living or non-living, where natural processes engender a plethora of arrangements, yet only a select few persist through a process termed "selection for function".

Co-author Jonathan Lunine, the David C. Duncan Professor in the Physical Sciences and chair of astronomy in the College of Arts and Sciences at Cornell University, said that the paper was "a true collaboration between scientists and philosophers to address one of the most profound mysteries of the cosmos: why do complex systems, including life, evolve toward greater functional information over time?"

The additional theory applies to systems, like cells or molecules, that are composed of parts that can be rearranged repeatedly by natural processes. While these phenomena can produce endless variation in structure, only a handful of these configurations tend to endure; the law terms this "selection for function". Darwin's law looked at a purely biological form of function: survival and reproduction. The new study suggests that this view can be widened to include other types of function. A third type, termed "novelty", embodies the propensity of evolving systems to venture into unprecedented configurations, occasionally culminating in novel characteristics.

The authors also draw parallels between biological evolution and the evolution of stars and minerals. Primordial minerals, they suggest, represented particularly stable atomic arrangements that laid the groundwork for subsequent mineral generations and, in turn, the emergence of life. The example of stellar structure shows how the tendency towards function can build complex systems: the earliest stars, formed just after the Big Bang, were composed of only two elements, hydrogen and helium. These were then built on to create the more than 100 elements that make up the periodic table today.

"If increasing functionality of evolving physical and chemical systems is driven by a natural law, we might expect life to be a common outcome of planetary evolution," concluded Lunine.

Reference: Wong ML, Cleland CE, Arend D, et al. On the roles of function and selection in evolving systems. PNAS. 2023;120(43):e2310223120. doi:10.1073/pnas.2310223120

This article is a rework of a press release issued by Cornell University. Material has been edited for length and content.

See the rest here:

"Missing Law of Nature" Proposes How Stars and Minerals Evolve ... - Technology Networks

Read More..

Girls4Tech STEM program: Closing the gender gap in tech – Mastercard

In 2015 I met Eva M. when the 11-year-old attended our first Girls4Tech program expansion to Gurugram, India.

Three years ago, she interned with us in our Toronto office, and today she's a programmer analyst with Scotiabank. She credits her love of cybersecurity and computer programming to the hands-on, real-world activities she enjoyed with Girls4Tech.

"I had the best time in that workshop," Eva recently told me. "There was so much to learn. What made Girls4Tech different than any other workshop is that we did activities, although on a smaller scale, that would actually happen at Mastercard."

Nearly 10 years ago, we created Girls4Tech, our signature STEM education program, to showcase Mastercard's technology and to help girls see that it takes all kinds of skills, including ones they already possess, like curiosity and initiative, to pursue a STEM career.

At the time, the number of girls pursuing STEM careers was at an all-time low, not just in the U.S. but around the globe. In 2017, one in five boys said they would pursue STEM, while only one in 20 girls were interested in seeking those same degrees, according to the World Economic Forum.

Our goal was to create a program that would focus on girls and engage our employees as role models and mentors, highlighting their payments technology backgrounds. We believed this corporate-community partnership could help level the playing field and reduce the inequities between boys and girls pursuing STEM careers. Because we know this: When there are multiple voices with myriad experiences at the table, we will create better technology and better products and services for our customers.

Since the early 2000s, I'm pleased to say there has been tremendous advocacy for girls in STEM, not just by us but by governments, major corporations and many nonprofits. You'd think with all this focus that the numbers would have changed dramatically. But according to Deloitte Global, the number of women in large technology companies has increased only 2.6% since 2019, and women represent just 33% of the population in tech roles. In the U.S., women make up only 28% of the STEM workforce, according to the American Association of University Women, and gender gaps are particularly high in some of the fastest-growing and highest-paid jobs of the future.

"We know this: When there are multiple voices, with myriad experiences at the table, we will create better technology and better products and services for our customers."

Susan Warner

So what does that tell us? There is more to be done. Capturing girls' interest in STEM at age 8 or 10 is one thing; keeping that interest is another. STEM role models and mentoring programs are integral to fostering that interest. That's why we will debut a new Girls4Tech mentoring and scholarship program in 2024.

Constant learning is also key. Parents, teachers and girls should check out upskilling programs like the ones Microsoft, Google and IBM have created. Stay on top of the trends: did you know women make up only 25% of the cybersecurity workforce, a field that already suffers from an enormous shortage of professionals? That's a STEM field just waiting for women to apply. And finally, when women join the STEM workforce, we need to retain them, so companies need to take a hard look at who is leaving these roles and why.

As we roll up our sleeves to get ready for the next 10 years, let's take a moment to celebrate Girls4Tech's success. To date, we've reached 5.7 million girls, two years earlier than the goal we announced in 2020, and according to research conducted by Euromonitor, we are now the world's largest STEM program designed for young girls.

We've translated our program into 23 languages, and more than 7,000 employees have volunteered at in-person and digital events in 63 countries. Last week, for International Day of the Girl, we hosted a "follow the sun" event in which we welcomed girls at 15 events in eight countries.

Since the launch of our original Girls4Tech program in 2014, we've expanded the curriculum to include Girls4Tech Cyber and AI, Girls4Tech 2.0, Girls4Tech & Sports and Girls4Tech & Code, a 20-week coding and mentoring program. In August we launched our first Girls4Tech Python Bootcamp for underrepresented college women in tech. And while Girls4Tech was not designed to be a pipeline program at Mastercard, we are very pleased to announce the first full-time hiring of a G4T participant, Zainab Ibrahim, an associate product specialist in Cyber & Intelligence Solutions.

To extend our curriculum reach over the years, Girls4Tech has partnered with education organizations including Scholastic, We Are Teachers, American Indian Foundation and Teach for Ukraine. In 2020 we announced a partnership with Discovery Education to expand Girls4Tech by bringing cyber and AI careers to life for students in the U.S. and Canada. And we're expanding our partnership and our work to include data science, AI and blockchain for 2024. As we look to support girls all over the world, Girls4Tech.com also offers free STEM activities and resources in 10 languages for teachers and parents to encourage those interested in fun STEM activities.

Yes, there's more work to be done to create an equitable workforce in technology. But it's women like Eva and Zainab (and Beatrice, Nahfila, Zoya, Rina, I could go on) who keep us focused. Because we also know this: Every act matters, and together we can make a difference and change the equation.

Originally posted here:

Girls4Tech STEM program: Closing the gender gap in tech - Mastercard

Read More..

IIT Madras partners with five startups for initiatives in emerging technologies – IndiaTimes

MUMBAI: IIT Madras Pravartak Technologies Foundation is partnering with start-ups for various strategic initiatives in emerging technologies. The key aspects of this collaboration include industry-oriented skilling in niche technologies by start-ups, and project execution in niche areas such as AI, ML and data science.

The MoU was signed recently between IITM Pravartak and five startups: Crion Versity, Dataswitch, Neekan Consulting, Rudram Dynamics and Skill Angels. Those present on the occasion included Prof V Kamakoti, Director, IIT Madras; Prof Mangala Sunder Krishnan, Professor Emeritus, IIT Madras; Dr MJ Shankar Raman, CEO, IIT Madras Pravartak Technologies Foundation; and Balamurali Shankar, General Manager, Digital Skills Academy, IIT Madras.

IITM Pravartak is a Section 8 company housing the Technology Innovation Hub (TIH) on Sensors, Networking, Actuators and Control Systems. It is funded by the Department of Science and Technology, Government of India, under its National Mission on Interdisciplinary Cyber-Physical Systems, and hosted by IIT Madras.

Highlighting the importance of this initiative, Prof V Kamakoti said on Tuesday, "Start-ups must become leading employers and look at IIT Madras for their talent requirements. Start-ups in the skilling sector should intervene early with students and impart cognitive ability and foundational maths and science skills for their success in higher education."

Speaking about the collaboration, MJ Shankar Raman said, "We will work with these start-ups on niche areas like drone pilot training, data analytics, AI/ML and generative AI. Our clients come to us for insights on their complex and sensitive unstructured data. We leverage startups like DataSwitch for such requirements."

He added, "One of our partners, Neekan Consulting, is a technology, process and marketing consulting company enabling SMBs and start-ups across industry domains. They work with us on product and program management, apart from skilling freshers and making them job-ready. Similarly, SkillAngels (based out of the IITM Research Park) uses gamification, animation and adaptive learning strategies for cognitive assessments and upskilling."

The important outcomes expected from this collaboration include newer ways of understanding cutting-edge technologies through content and platforms belonging to start-ups, and the development of point solutions for niche problem areas in AI, ML and data science.

Further, Balamurali Shankar said, "These start-ups have a combination of academic and industry expertise, thereby giving a good learning experience to the students. One of our partners, Crion Versity, was founded by IIT Madras alumni and comes with the rich experience of running a digital twin organization, Crion Technologies, at the IITM Research Park. Their flagship career experience programs provide engaging short-form learning on job skills in areas such as data analytics."

Balamurali Shankar also mentioned that Rudram Dynamics offers specialized programs such as drone pilot training, B2G (Business to Government) analytics programs as well as cyber law.

With all these start-ups coming together, students and industry professionals have a variety of choices to upskill in their respective domains.

Read this article:

IIT Madras partners with five startups for initiatives in emerging technologies - IndiaTimes

Read More..

DeepMind Wants to Use AI to Solve the Climate Crisis – Wired.co.uk

It's a perennial question at WIRED: Tech got us into this mess; can it get us out? That's particularly true when it comes to climate change. As the weather becomes more extreme and unpredictable, there are hopes that artificial intelligence, that other existential threat, might be part of the solution.

DeepMind, the Google-owned artificial intelligence lab, has been using its AI expertise to tackle the climate change problem in three different ways, as Sims Witherspoon, DeepMind's climate action lead, explained in an interview ahead of her talk at WIRED Impact in London on November 21. This conversation has been edited for clarity and length.

WIRED: How can AI help us tackle climate change?

Sims Witherspoon: There are lots of ways we can slice the answer. AI can help us in mitigation. It can help us in adaptation. It can help us with addressing loss and damage. It can help us in biodiversity and ecology and much more. But I think one of the ways that makes it more tangible for most people is to talk about it through the lens of AI's strengths.

I think of it in three parts: First and foremost, AI can help us understand climate change and the problems that we face related to climate change through better models for prediction and monitoring. One example is our work on precipitation nowcasting (forecasting rain a few hours ahead), and our models were voted more useful and more accurate than other methods by Met Office forecasters, which is great.

But it's also just the start, because you can then build to predict much more complex phenomena. So AI can be a really significant tool in helping us understand climate change as a problem.

What's the second thing?

The second bucket that I like to think about is the fact that AI can help us optimize current systems and existing infrastructure. It's not enough to start building new green technology for a more sustainable tomorrow; life needs to go on. We already have many systems that we rely on today, and we can't just burn them all down and start from scratch. We need to be able to optimize those existing systems and infrastructure, and AI is one of the tools that we can use to do this.

Continue reading here:
DeepMind Wants to Use AI to Solve the Climate Crisis - Wired.co.uk

Read More..


Google has sent internet into ‘spiral of decline’, claims DeepMind co … – The Telegraph

Google has plunged the internet into a "spiral of decline", the co-founder of the company's artificial intelligence (AI) lab has claimed.

Mustafa Suleyman, the British entrepreneur who co-founded DeepMind, said: "The business model that Google had broke the internet."

He said search results had become plagued with clickbait to keep people addicted and absorbed on the page as long as possible.

Information online is "buried at the bottom of a lot of verbiage and guff", Mr Suleyman argued, so websites can sell more adverts, fuelled by Google's technology.

Mr Suleyman was one of three people who set up the pioneering AI lab DeepMind in London in 2010. The company was bought by Google for £400m and has become the cornerstone of the search giant's AI operations.

Mr Suleyman, 39, quit Google 18 months ago and has since set up a rival venture, Inflection AI. The company is developing a conversational chatbot, similar to ChatGPT, amid a race by AI companies to usurp Google's dominance of the web.

The entrepreneur has developed a chatbot called Pi, which he says can act as a kind of AI confidante or coach. He has raised more than $1.5bn for the new technology.

The criticism of his former employer came as Mr Suleyman told the Telegraph about plans for a new international body to monitor AI threats.

Mr Suleyman, along with billionaire former Google chief executive Eric Schmidt, plans to present proposals for an International Panel on AI Safety at Prime Minister Rishi Sunak's global summit on the technology next month.

The DeepMind co-founder said the panel could be modelled on the IPCC, the Intergovernmental Panel on Climate Change, to establish the scientific consensus around the current capabilities of AI.

Mr Suleyman said the IPCC, which was first set up in 1988, was a good inspiration for establishing a rigorous body for making predictions about AI risks. Other backers of the plan include Reid Hoffman, the billionaire LinkedIn founder, and Florentino Cuéllar, president of the Carnegie think tank.

The AI panel would provide governments with regular assessments on the level of danger posed by the technology.

The UK's AI Safety Summit is due to take place at Bletchley Park and is expected to gather world leaders and tech entrepreneurs to address the challenges of "frontier AI" that might cause significant harm, including the loss of life.

The two-day summit on Nov 1 and 2 is expected to be attended by top lobbyists from the likes of Meta and Google. Kamala Harris, the US vice president, is expected to attend, while a Chinese delegation has been invited.

The leaders will try to find common ground on tackling AI risks. Officials are also understood to be considering setting up an international institute for AI safety.

Excerpt from:
Google has sent internet into 'spiral of decline', claims DeepMind co ... - The Telegraph

Read More..