
Super Simple Way to Build Bitcoin Dashboard with ChatGPT – DataDrivenInvestor

Leveraging ChatGPT and Python for Effective Data Science with Bitcoin: Build a Dashboard Super Fast!

There is no doubt that one of the most important phases of a data science project is data visualization.

Creating impressive, live visuals will give you and your project a head start.

Of course, you can create a dashboard by coding it from A to Z, or with smart tools like Tableau, Power BI or Google Data Studio. Yet I am feeling lazy today, so I do not want to do that much manual labor.

By the way, if you want to use ChatGPT to create impressive visuals, do not forget to check out these prompts.

So, once again, let's take advantage of ChatGPT. This one should be especially attractive for you, because it involves Bitcoin too!

I don't want to spend much time explaining what data visualization, Plotly or Dash is.

You can Google them, and I hope you are already familiar with them, but it is not strictly necessary right now.

We will go straight into the coding exercise.

Let's talk with ChatGPT, our loyal companion.

But you want to see the dashboard and the code right away, am I right?

Let's see.

Here you can see data for 10,072 different coins. You can set the time window; I set it to 2,100 days, as you will see in the code below.

I also added seven different time ranges below the coin section.

Let's also look at the Metrics Analysis tab.

Here you can see the 24-hour change in market cap, which might be a good indicator.

The possibilities really are endless.

I am not a Bitcoin or financial expert, so feel free to find different graphs and metrics you want, and add them to the dashboard by simply asking ChatGPT to update its code. (The full code is at the end of the article.)

I suggest you run the first section of the code in another environment (a Jupyter Notebook might be good) to check whether the data types are correct, because the code is long and asking ChatGPT to update it can take quite a while.

Also, the data types might have changed since ChatGPT's training data was cut off in 2021, so it might give you outdated code. If you hit an error about a data type, it is worth checking it somewhere other than PyCharm.
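For example, here is a quick sanity check you can run in a notebook. The sample response is hard-coded for illustration so it works without a network call; the real CoinGecko market_chart endpoint returns the same structure, a "prices" list of [timestamp_ms, price] pairs.

```python
import pandas as pd

# Sample of what the CoinGecko market_chart endpoint returns: the
# 'prices' key holds [timestamp_in_milliseconds, price] pairs.
# (Hard-coded here so the check works without a network call.)
sample = {"prices": [[1684108800000, 27100.5], [1684195200000, 26950.2]]}

df = pd.DataFrame()
df["times"] = pd.to_datetime([p[0] for p in sample["prices"]], unit="ms")
df["prices"] = [p[1] for p in sample["prices"]]

# If either dtype comes out as 'object', the API shape has changed and
# the dashboard's plotting code will likely need an update.
print(df.dtypes)
```

If `times` comes out as `datetime64[ns]` and `prices` as `float64`, the plotting code should handle them without complaint.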

import json

import dash
import pandas as pd
import plotly.graph_objects as go
import requests
from dash import dcc, html
from dash.dependencies import Input, Output

# Get the initial list of coins
response = requests.get('https://api.coingecko.com/api/v3/coins/list')
coins_list = json.loads(response.text)
coins = [coin['id'] for coin in coins_list if isinstance(coin, dict) and 'id' in coin]

# Set up the Dash app
app = dash.Dash(__name__)

app.layout = html.Div([
    html.H1("Cryptocurrency Live Dashboard Empowered by ChatGPT",
            style={'text-align': 'center'}),
    dcc.Tabs(id="tabs", value='tab-price', children=[
        dcc.Tab(label='Price Analysis', value='tab-price', children=[
            html.Div([
                dcc.Dropdown(
                    id="slct_coin",
                    options=[{"label": coin, "value": coin} for coin in coins],
                    multi=False,
                    value=coins[0],  # select the first coin in the list by default
                    style={'width': "40%"}),
            ], style={'width': '50%', 'margin': 'auto', 'padding': '10px'}),
            html.Div([
                html.Button("1M", id='btn-1m', n_clicks=0),
                html.Button("2M", id='btn-2m', n_clicks=0),
                html.Button("3M", id='btn-3m', n_clicks=0),
                html.Button("6M", id='btn-6m', n_clicks=0),
                html.Button("1Y", id='btn-1y', n_clicks=0),
                html.Button("2Y", id='btn-2y', n_clicks=0),
                html.Button("All", id='btn-all', n_clicks=0),
            ], style={'width': '50%', 'margin': 'auto', 'padding': '10px'}),
            html.Div(id='output_container', children=[],
                     style={'text-align': 'center'}),
            dcc.Graph(id='coin_price_graph', style={'height': '500px'}),
        ]),
        dcc.Tab(label='Metrics Analysis', value='tab-metrics', children=[
            html.Div([
                html.H2('Metrics Analysis', style={'text-align': 'center'}),
                html.Table(id='metrics_table', children=[
                    html.Thead(html.Tr([html.Th('Metric'), html.Th('Value')])),
                    html.Tbody([
                        html.Tr([html.Td('Market Cap'), html.Td(id='metric-market-cap')]),
                        html.Tr([html.Td('Volume'), html.Td(id='metric-volume')]),
                        html.Tr([html.Td('Price'), html.Td(id='metric-price')]),
                        html.Tr([html.Td('24h Change'), html.Td(id='metric-24h-change')]),
                    ]),
                ]),
            ], style={'width': '50%', 'margin': 'auto', 'padding': '10px'}),
        ]),
    ]),
    dcc.Interval(
        id='interval-component',
        interval=60 * 60 * 1000,  # in milliseconds (update every hour)
        n_intervals=0),
])


@app.callback(
    [Output(component_id='output_container', component_property='children'),
     Output(component_id='coin_price_graph', component_property='figure'),
     Output(component_id='metrics_table', component_property='children')],
    [Input(component_id='slct_coin', component_property='value'),
     Input('btn-1m', 'n_clicks'),
     Input('btn-2m', 'n_clicks'),
     Input('btn-3m', 'n_clicks'),
     Input('btn-6m', 'n_clicks'),
     Input('btn-1y', 'n_clicks'),
     Input('btn-2y', 'n_clicks'),
     Input('btn-all', 'n_clicks'),
     Input('interval-component', 'n_intervals')])
def update_graph(slct_coin, btn_1m, btn_2m, btn_3m, btn_6m, btn_1y, btn_2y, btn_all, n):
    # Work out which button (if any) triggered this callback
    changed_id = [p['prop_id'] for p in dash.callback_context.triggered][0]

    if 'btn-1m' in changed_id:
        days = 30
    elif 'btn-2m' in changed_id:
        days = 60
    elif 'btn-3m' in changed_id:
        days = 90
    elif 'btn-6m' in changed_id:
        days = 180
    elif 'btn-1y' in changed_id:
        days = 365
    elif 'btn-2y' in changed_id:
        days = 730
    elif 'btn-all' in changed_id:
        days = 2100
    else:
        days = 2100  # default time period (all available data)

    if days is not None:
        response = requests.get(
            f'https://api.coingecko.com/api/v3/coins/{slct_coin}/market_chart'
            f'?vs_currency=usd&days={days}&interval=daily')
        data = json.loads(response.text)

        df = pd.DataFrame()
        df['times'] = pd.to_datetime([x[0] for x in data['prices']], unit='ms')
        df['prices'] = [x[1] for x in data['prices']]

        fig = go.Figure()
        fig.add_trace(go.Scatter(x=df['times'], y=df['prices'],
                                 mode='lines', name='Price'))

        fig.update_layout(
            title={'text': "Price of " + slct_coin.capitalize() + " in USD",
                   'y': 0.95, 'x': 0.5, 'xanchor': 'center', 'yanchor': 'top'},
            xaxis_title="Time",
            yaxis_title="Price (USD)",
            legend_title="Variables",
            paper_bgcolor='rgba(240, 240, 240, 0.6)',
            plot_bgcolor='rgba(240, 240, 240, 0.6)',
            font=dict(color='black'),
            showlegend=False,
            yaxis=dict(gridcolor='lightgray'),
            xaxis=dict(gridcolor='lightgray'))

        # Metrics analysis
        response_metrics = requests.get(
            f'https://api.coingecko.com/api/v3/coins/{slct_coin}')
        metrics_data = json.loads(response_metrics.text)
        market_cap = metrics_data['market_data']['market_cap']['usd']
        volume = metrics_data['market_data']['total_volume']['usd']
        price = metrics_data['market_data']['current_price']['usd']
        change_24h = metrics_data['market_data']['price_change_percentage_24h']

        # Determine the arrow symbol and color based on the change_24h value
        if change_24h < 0:
            arrow_symbol = '▼'
            arrow_color = 'red'
        else:
            arrow_symbol = '▲'
            arrow_color = 'green'

        # Create the metrics table
        metrics_table = html.Table([
            html.Thead(html.Tr([html.Th('Metric'), html.Th('Value')])),
            html.Tbody([
                html.Tr([html.Td('Market Cap'), html.Td('${:,.2f}'.format(market_cap))]),
                html.Tr([html.Td('Volume'), html.Td('${:,.2f}'.format(volume))]),
                html.Tr([html.Td('Price'), html.Td('${:,.2f}'.format(price))]),
                html.Tr([html.Td('24h Change'), html.Td([
                    html.Span(arrow_symbol + ' ',
                              style={'color': arrow_color, 'font-weight': 'bold'}),
                    html.Span('{:.2f}%'.format(change_24h),
                              style={'display': 'inline-block'})])]),
            ]),
        ], style={'width': '100%', 'font-family': 'Arial, sans-serif',
                  'font-size': '16px', 'text-align': 'center'})

    else:
        fig = go.Figure()
        metrics_table = html.Table([])

    container = "The coin chosen by the user was: {}".format(slct_coin)

    return container, fig, metrics_table


if __name__ == '__main__':
    app.run_server(debug=True, port=8052)

If you've made it this far, thank you! In case you're not yet a Medium member and want to expand your knowledge through reading, here's my referral link.

I continually update and add new Cheat Sheets and Source Codes for your benefit. Recently, I crafted a ChatGPT cheat sheet, and honestly, I can't recall a day when I haven't used ChatGPT since its release.

Also, here is my e-book, which explains how machine learning can be learned by using ChatGPT.

Feel free to select one of the Cheat Sheets or projects for me to send you by completing the forms below:

Here is my NumPy cheat sheet.

Here is the source code of the How to be a Billionaire data project.

Here is the source code of the Classification Task with 6 Different Algorithms using Python data project.

Here is the source code of the Decision Tree in Energy Efficiency Analysis data project.

Here is the source code of the DataDrivenInvestor 2022 Articles Analysis data project.

"Machine learning is the last invention that humanity will ever need to make." (Nick Bostrom)

View original post here:

Super Simple Way to Build Bitcoin Dashboard with ChatGPT - DataDrivenInvestor


Meet the U’s newest research instrument: The Zeiss Xradia Versa … – Vice President for Research

By Xoel Cardenas, Sr. Communications Specialist, Office of the Vice President of Research

It's not every day, or even every year, that the University of Utah gets a research instrument that is the envy of many universities and institutions. But recently, the U welcomed an X-ray microscope that will promote research innovations, discoveries, and collaborations.

In January, the Utah Nanofab announced the arrival of a new Zeiss Xradia Versa 620 X-ray microscope, which was installed a few weeks later. It's an X-ray microscope that will provide 3D, sub-micron imaging resolution of hard, soft and biological materials, according to the department's announcement. Materials can be studied under mechanical loads (up to 5 kN) and/or temperature conditions (-20 to 160 °C).

"The Versa 620 is a state-of-the-art instrument that will be unique in the Intermountain West region," Utah Nanofab added. A wide range of transformative studies in various fields will be enabled, they added, including aerospace materials, semiconductor devices, additively manufactured materials, geology, biology, medicine and more.

We spoke to Dr. Jacob Hochhalter, PI on the NSF proposal that funded the instrument acquisition and Assistant Professor in the Department of Mechanical Engineering at the U. He told us more about the Versa 620, what it can do, and how it will advance research and discoveries at the U.


Q: Tell us about what the Versa 620 is and what it does.

Hochhalter: First, its an X-ray microscope. Starting from those two words, it should paint two pictures in your mind. The first is the commonly known X-ray image, which illustrates differences in material densities as varying contrast (light vs. dark), like differentiating a bone from its surrounding tissue. Second, the microscope part, means that researchers can make observations at small scales (think very small fractions of the diameter of a human hair). Consequently, beyond what a patient might conventionally see at the doctor, in the X-ray microscope researchers can also magnify to observe the very small length scales at which many fundamental mechanisms of materials operate. The level of magnification can be changed on-the-fly so scans of larger volumes at lower resolution can be done to detect interesting features, with a subsequent focus with higher magnification (higher resolution) to learn more about those features.

Q: How long did it take from the beginning of the idea of wanting to acquire this machine to successfully being awarded to acquire it?

Hochhalter: Success in these large grants requires persistence and proposals that get people excited. We submitted the proposal four times. The first two times, the proposal was technically sound but not exciting enough to be competitive. Once we realized this sticking point, we focused on building our regional and national collaborations, eventually receiving over 50 support letters from around the country. Once we made those connections, the regional and national impact was made clear across applications in aerospace, structural, biological, and geological materials, to name a few. I have been told that this is the first Track 2 (above $1.4M) NSF MRI award that Utah has led. Having learned from our early failures, we plan to capitalize on what we have learned through this process to bring more exciting instruments like this to the U.

Q: When it comes to the possibility of students or faculty discovering new things using this X-ray microscope that our university has, how will this machine help accelerate the step-by-step process of research?

Hochhalter: Prior to this award, faculty at the U had to travel to one of a handful of places in the U.S., commonly called beamline facilities, which are massive facilities that enable similar acquisition capabilities. However, those resources are heavily utilized, and researchers are required to write proposals for access. If granted, travel to the facility for an abbreviated study is required, which inherently restricts the impact of these exciting methods. With the Versa at the U, researchers now have a lab-scale surrogate for beamline resources, which enables more widespread, inclusive adoption of exciting experimental studies that help accelerate materials development. An exciting impact is that this accessibility will increase the quantity of data available, which researchers can leverage to advance a new frontier for data analytics and machine learning applications in materials research.

In other words, more observations not only opens the door for discovery, but the one thing that were really excited about is by being able to acquire more data to start leveraging data science methods and collaborating with, say, folks in the computer science department to bring new methods like machine learning and artificial intelligence to these studies. The other maybe more seemingly intangible, but very important possibility, is that this instrument will help the U build collaborations around the country and increase our impact.


Q: What are some of the ideas or projects in mind when it comes to the Versa 620 and how it can help promote research among young students, in particular, help promote STEM education and have students at a younger age be more involved with what this machine can do?

Hochhalter: One of our goals for the next year is to create an inter-high school competition which will mimic the scientific process. So, phase one of the competition would be a proposal phase, during which Utah faculty would pose an open question and students would propose what should be scanned and how the data should be analyzed. Phase two would include the students receiving those data, analyzing them using their own creative process, and describing what they were able to learn. I am also working closely with the STEMCAP group at the U, who help open the exciting world of STEM to youth-in-custody students. This fall, we will be hosting a virtual tour (via Zoom) of the new X-ray microscope for students in that program.

Q: As a researcher and an educator, just how exciting is it to have a device like this, to be able to share it with students and to be a part of all this? It's definitely got to be high up there on the list of personal accomplishments, correct?

Hochhalter: You know, using it as a scientific tool is great. It helps us learn new things and develop new products across a broad range of applications. But in the end, the reason why we're at the U is because we like to make an impact. I've been at the U for five years, and before that I was at NASA for ten. Ultimately, I came to the U because I wanted to be closer to the impact on our future generations of scientists and engineers. With that in mind, getting students excited about the future of materials research and providing this new level of insight into material behavior is priceless.

To watch the Versa 620 in action, click here.

More information on the Utah Nanofab can be found here.

More here:

Meet the U's newest research instrument: The Zeiss Xradia Versa ... - Vice President for Research


Arjun Verma’s approach to science is equal parts heart and hands-on – UCLA Newsroom

It's hard to say how Bay Area native Arjun Verma first fell in love with science.

One could say that it was inevitable; after all, his mother was a physician who transitioned into clinical research, and his father is a software engineer. But he traces the initial spark to lessons he learned as a child while spending time with his friendly neighbors.

"One was a retired engineer, and he spent a lot of time with me, digging in the garden for bugs and building model train sets and balsa wood airplanes," Verma said. "And that was when I really gained a deep appreciation for working with my hands and understanding how things work."

Today, Verma is a molecular, cell and developmental biology major with a minor in bioinformatics on the cusp of his graduation from UCLA and his entrance into Harvard Medical School this fall. His goal is to become both a scientist and a cardiothoracic surgeon.

"I'm very interested in surgery and data science, and I hope I can contribute to the melding of the two. Through my volunteering, I've learned that I love to interact with patients face-to-face and to be a pillar of support for them as they go through difficult times," he said. "But I also really enjoy the process of taking the challenges patients face and zooming out to think, 'What kind of research can be done to solve these issues?' That's something I was really exposed to in the CORELAB."

The Cardiovascular Outcomes Research Laboratories' principal investigator is Dr. Peyman Benharash, a UCLA Health cardiothoracic surgeon. For many of the research projects that Verma worked on under Benharash, he used data science and machine learning techniques to identify factors that contributed to postoperative complications and prolonged hospital stays. He also developed methods to 3D-print accurate heart models for surgical education.

"I like to do a lot of different things, and my research lab is all computational, so majoring in MCDB was like scratching my itch to learn more about the intricacies of medicine and human biology," Verma said. "In class, I enjoyed learning about things like DNA repair, metabolism and cancer stem cells; molecular, cell and developmental biology courses have undoubtedly kept my passion alive for the nuanced concepts I'll definitely encounter in medical school and beyond."

Some might argue that he's already made substantial progress in the field. As a student in UCLA's Undergraduate Research Scholars Program, he delivered one of only 19 podium presentations accepted at the Western Thoracic Surgical Association's Annual Meeting and published scholarly articles in JAMA Cardiology as well as the Annals of Thoracic Surgery, the latter as lead author. In addition, he's the founder and president of TechConnected, a student organization whose members volunteer free graphic design and web development expertise to further social change.

"UCLA has definitely taught me about myself and how to be more resilient. When I was picking where to go to school, everyone said UCLA was too hard for premeds, but I saw it as a challenge," Verma said. "I love the energy and the people here, the presence of diverse perspectives. This community is something I'll hold with me forever."

As he experiences that inevitable blend of excitement and fear any soon-to-be college graduate can relate to, Verma remains proud of all he's accomplished in the last four years. His parents are too, although getting them to say it out loud is another matter.

"Indian families can be very muted when it comes to praise," Verma said with a laugh, sharing how after he committed to Harvard and updated his LinkedIn profile accordingly, he had a 30-minute phone call with his parents.

"We were just talking about random stuff, like my week, but after we finished the call and hung up, I saw that I had a new LinkedIn message," he said. "It was from my dad, and he said, 'We're proud of you.'"

See original here:

Arjun Verma's approach to science is equal parts heart and hands-on - UCLA Newsroom


Professorship of Data Science and Healthcare Improvement job with … – Times Higher Education

The Board of Electors to The Professorship of Data Science and Healthcare Improvement invites applications for this major academic leadership role tenured to the retiring age.

Based in The Healthcare Improvement Studies Institute (THIS Institute) within the Department of Public Health and Primary Care, you will make significant contributions to the intellectual development of the discipline of healthcare improvement studies and lead programmes of research of national and international importance. You will form collaborations with academic and clinical partners and with world-leading centres in health data science at Cambridge and beyond, and secure funding and publish research of internationally excellent quality that results in real-world impact.

You will be a world-class academic with a distinguished track-record in the field of data science applied to healthcare improvement. You will have an outstanding record in research leadership and in teaching, training and capacity-building, and a proven ability to work collaboratively across organisations, disciplines and sectors and to communicate effectively with NHS partners and stakeholders. With an established background in a relevant area such as statistics, epidemiology, machine learning, or health informatics you will understand the challenges of making change in complex socio-technical systems and will have significant expertise in the use of routinely collected healthcare data to support actionable improvement in healthcare.

You will provide leadership for education and training in the Department and the School of Clinical Medicine more broadly. And, as a senior academic leader in the Department, you will demonstrate superb organisational citizenship.

If appointed, you will be an independent University-employed academic, responsible to the Director of THIS Institute and to the Head of the Department of Public Health and Primary Care.

You will be based in Cambridge. A competitive salary will be offered.

How to apply

Further information, including a detailed role description and person specification, and details on how to apply can be downloaded at https://candidates.perrettlaver.com/vacancies/ quoting reference number 6605.

For an informal and confidential discussion about the role, please contact Urvashi Ramphul on +44(0)20 7340 6280 or via email at urvashi.ramphul@perrettlaver.com.

The closing date for applications is Monday 10th July 2023 at 09:00 BST.

The University actively supports equality, diversity and inclusion and encourages applications from all sections of society.

The University has a responsibility to ensure that all employees are eligible to live and work in the UK.

For a conversation in confidence, please contact Urvashi Ramphul on +44 (0)20 7340 6280 or via email at urvashi.ramphul@perrettlaver.com. Should you require access to these documents in alternative formats, please contact Esther Elbro at Esther.Elbro@perrettlaver.com. If you have comments that would support us to improve access to documentation, or our application processes more generally, please do not hesitate to contact us via accessibility@perrettlaver.com.

Privacy Policy

Protecting your personal data is of the utmost importance to Perrett Laver and we take this responsibility very seriously. Any information obtained by our trading divisions is held and processed in accordance with the relevant data protection legislation. The data you provide us with is securely stored on our computerised database and transferred to our clients for the purposes of presenting you as a candidate and/or considering your suitability for a role you have registered interest in.

As defined under the General Data Protection Regulation (GDPR) Perrett Laver is a Data Controller and a Data Processor, and our legal basis for processing your personal data is Legitimate Interests. You have the right to object to us processing your data in this way. For more information about this, your rights, and our approach to Data Protection and Privacy, please visit our website http://www.perrettlaver.com/information/privacy/.

Read more:

Professorship of Data Science and Healthcare Improvement job with ... - Times Higher Education


Here’s how to master machine learning – SiliconRepublic.com

With machine learning skills, you can work in data science, AI and medtech to name a few. Here, we give some pointers on how to get started.

Machine learning is a subset of AI that is used in a lot of real-world scenarios including customer service, recommender algorithms and speech-recognition software.

As machine learning is so widely used it is a great area to get familiar with. A very simple way of explaining machine learning and how it works is to think of it as computers imitating the way humans learn using algorithms and data.

Let's take a look at some of the concepts you should know in machine learning. You may end up homing in on one of these areas down the line, after you've learned some of the basics.

Neural network architecture is often also referred to as deep learning. It consists of algorithms that can mimic the way human brains learn to process and recognise relationships between large data sets.

You'll find neural networks used in sectors such as market research and any industry that interacts with large data sets.

There are three main types of learning in neural networks: supervised learning, unsupervised learning and reinforcement learning. We'll take a look at the difference between supervised and unsupervised learning a little further on in the piece.
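To make that mechanism concrete, here is a minimal sketch of the computation a single artificial neuron performs; the weights and inputs below are arbitrary illustrative values, not trained ones.

```python
import numpy as np

def neuron(x, w, b):
    # Weighted sum of inputs plus a bias, squashed by a sigmoid
    # activation; stacking many of these units gives a neural network.
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

x = np.array([0.5, -1.0])  # two input features
w = np.array([2.0, 1.0])   # one weight per input

print(neuron(x, w, b=0.0))  # weighted sum is 0, and sigmoid(0) = 0.5
```

Training a network is then a matter of adjusting all the weights and biases so the outputs match known examples.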

This consists of a set of machine learning methods that predict a continuous outcome variable based on the value of one or multiple predictor variables.

Regression analysis can be used for things like predicting the weather or predicting the price of a product or service given its features.
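A toy sketch of that idea, with made-up numbers rather than real prices: fit a straight line to four (size, price) points and predict the continuous outcome for a new input.

```python
import numpy as np

# Made-up training data: price happens to equal exactly 3 * size.
sizes = np.array([50.0, 70.0, 90.0, 110.0])      # predictor variable
prices = np.array([150.0, 210.0, 270.0, 330.0])  # continuous outcome

# Least-squares fit of prices = a * sizes + b
a, b = np.polyfit(sizes, prices, deg=1)

predicted = a * 100.0 + b  # predict the price for an unseen size
print(round(predicted, 1))  # -> 300.0
```

Real regression problems use many predictors and noisy data, but the principle of fitting parameters that map inputs to a continuous output is the same.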

Clustering does what its name says in that its main purpose is to identify patterns in data so it can be grouped.

The tool uses a machine learning algorithm to create groups of data with similar characteristics. It can do this much faster than humans can.

Supervised machine learning relies on labelled input and output data, but unsupervised does not. Unsupervised machine learning can process raw and unlabelled data.

Clustering uses unsupervised machine learning because it groups unlabelled data.
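A minimal sketch of that unsupervised grouping, using a few hand-rolled k-means iterations on made-up one-dimensional data; note that no labels are ever shown to the algorithm.

```python
import numpy as np

points = np.array([1.0, 1.2, 0.8, 8.0, 8.3, 7.9])  # unlabelled data
centers = np.array([0.0, 10.0])  # crude initial guesses for 2 clusters

for _ in range(10):
    # Assign each point to its nearest centre...
    labels = np.abs(points[:, None] - centers[None, :]).argmin(axis=1)
    # ...then move each centre to the mean of its assigned points.
    centers = np.array([points[labels == k].mean() for k in range(2)])

print(np.sort(centers))  # centres settle near 1.0 and 8.07
```

The algorithm discovers the two groups purely from the structure of the data, which is exactly what distinguishes unsupervised from supervised learning.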

As we have identified, machine learning professionals interact with data quite a bit. As well as software engineering knowledge, they should have some data science skills.

This piece by Coursera on machine learning skills recommends that people learn data science languages like SQL, Python, C++, R and Java for stats analysis and data modelling.

That brings us on to maths; you will need a fairly solid grounding in statistics and maths to be able to understand the data science components of machine learning.

Being able to think critically about why you're using certain machine learning techniques is also pretty important, especially if you need to explain your methods and reasoning to colleagues from a non-tech background.

Earlier this year, Yahoo's Zuoyun Jin gave us some tips for learning, based on his experience as a machine learning research engineer.

If you want to brush up on your Python for machine learning, this guide on SiliconRepublic.com points you in the direction of some handy resources.

In terms of gaining a basic overview of machine learning, you might want to check out some online beginners courses. This Understanding Machine Learning programme from Datacamp says it provides an introduction with no coding involved.

If you are looking for something more advanced, this course by MIT gives learners an introduction to machine learning as well as ways the tech can be used by businesses. It's mainly geared towards applying the techniques in a business context.

Last but not least, Google's Machine Learning Crash Course is a 25-lesson programme that features lectures on the topic from Googlers.

10 things you need to know, direct to your inbox every weekday. Sign up for the Daily Brief, Silicon Republic's digest of essential sci-tech news.

Excerpt from:

Here's how to master machine learning - SiliconRepublic.com


Jack Gao: Prepare for profound AI-driven transformations – China.org

On March 14, Dr. Jack Gao, CEO of Smart Cinema and former president of Microsoft China, was left amazed after watching the livestream of GPT-4's press conference. He was stunned by what the chatbot is able to do.

Jack Gao delivers a keynote speech on artificial intelligence during a summit forum before the 14th Chinese Nebula Awards gala in Guanghan, Sichuan province, May 13, 2023. [Photo courtesy of EV/SFM]

"I was so excited and couldn't calm down for a whole week. During that time, Baidu also released its own Ernie Bot, and Alibaba followed with Tongyi Qianwen. There are more AI bots to come, such as the one from Google," Gao told China.org.cn, adding that he later engaged in conversations with insiders from various industries to get a clear understanding of the bigger picture.

Last weekend, he discussed this topic at China's top sci-fi event, the 14th Chinese Nebula Awards, where he also delivered a keynote speech and sought feedback from China's most prominent sci-fi writers, who have frequently envisioned the future and portrayed artificial intelligence (AI) in their novels.

"The era of AI has arrived. I have an unprecedented feeling knowing that it can pass the lawyers' exam with high scores and even possess a common sense that was previously exclusive to humans," Gao said. "When AI becomes another intelligent brain in our lives and has the potential to develop consciousness for the benefit of the entire human race, its intelligence will expand infinitely."

The profound changes will come quickly, according to his vision. AI could directly handle many aspects of human life, from translation and communication to medical diagnoses, lawsuits, and creative jobs. This could bring greater efficiency and upgrades to current industries, but it also raises concerns.

Some have already recognized the threats, like Hollywood scriptwriters who went on strike in early May due to concerns about AI "generative" text and image tools impacting their jobs and incomes. Tech giants have also laid off numerous employees after embracing AI technologies. Geoffrey Hinton, widely regarded as the "godfather" of AI, departed from Google and raised warnings about the potential dangers of AI chatbots, emphasizing their potential to surpass human intelligence in the near future. Hinton also cautioned against the potential misuse of AI by "bad actors" that could have harmful consequences for society.

"When I was a student 40 years ago, our wildest imaginations couldn't compare to what we have today. Technology has fundamentally transformed our lives," Gao said. The man has an awe-inspiring profile in both the tech and media industries, having served as a top executive at Autodesk Inc., Microsoft, News Corp., and Dalian Wanda Group. He has witnessed numerous significant technological advancements over the decades, from PC computers and the internet to big data, which have brought about great changes to the world.

When Google's AlphaGo AI defeated the world's number one Go player, Ke Jie, people began to recognize the power of AI, although they initially thought its impact was limited to the realm of Go. "But what if there's an 'AlphaGo' in every industry?" Gao mused. "What can humans do, and how can they prevail? Imagine a scenario where you have your own 'AlphaGo' while others do not. This is the reality we are facing, and we must take it seriously."

He believes that the digital gap between machines and humans has been bridged so that AI bots can interact with humans through chat interfaces without the need for programmers to write code. He also believes that when large language models reach a sufficient scale, new chemical sparks will ignite, leading to new miracles of some kind. "You have to understand that language is the foundational layer and operating system of human civilization and ecology."

"Based on my experience using and learning from AI bots, I have also noticed an important factor: the quality of answers from chatbots depends on how you ask them. Our way of thinking will shift towards seeking answers because there are countless valuable answers in the world waiting for good questions," he said. He added that people should prepare themselves with optimism to understand, utilize, explore, and harness AI, making it a beneficial and integral part of their lives.

Gao's speech caused a stir at the sci-fi convention. After he finished, many sci-fi writers, including eminent figures like Han Song and He Xi, approached him to discuss further. "They told me that after listening to my speech, they had a more personal understanding of how AI will truly impact our lives and work. The technology is already here, and we have no choice but to actively explore and embrace it, adapting to the changes."

See the article here:
Jack Gao: Prepare for profound AI-driven transformations - China.org


Google at I/O 2023: We've been doing AI since before it was cool – Ars Technica

Google CEO Sundar Pichai explains some of the company's many new AI models.


That Google I/O show sure was something, wasn't it? It was a rip-roaring two hours of nonstop AI talk without a break. Bard, PaLM, Duet, Unicorn, Gecko, Gemini, Tailwind, Otter: there were so many cryptic AI code names thrown around that it was hard to keep track of what Google was talking about. A glossary really would have helped. The highlight was, of course, the hardware, but even that was talked about as an AI delivery system.

Google is in the midst of a total panic over the rise of OpenAI and its flagship product, ChatGPT, which has excited Wall Street and has the potential to steal some queries people would normally type into Google.com. It's an embarrassing situation for Google, especially for its CEO Sundar Pichai, who has been pitching an "AI first" mantra for about seven years now and doesn't have much to show for it. Google has been trying to get consumers excited about AI for years, but people only seemed to start caring once someone other than Google took a swing at it.

Even more embarrassing is that the rise of ChatGPT was built on Google's technology. The "T" in "ChatGPT" stands for "transformer," a neural network technique Google invented in 2017 and never commercialized. OpenAI took Google's public research, built a product around it, and now uses that product to threaten Google.

In the months before I/O, Pichai issued a "Code Red" warning across the company, saying that ChatGPT was something Google needed to fight, and it even dragged its co-founders, Larry Page and Sergey Brin, out of retirement to help. Years ago, Google panicked over Facebook and mandated that all employees build social features in Google's existing applications. And while that was a widely hated initiative that eventually failed, Google is dusting off that Google+ playbook to fight OpenAI. It's now reportedly mandated that all employees build some kind of AI feature into every Google product.

"Mandatory AI" is certainly what Google I/O felt like. Each section of the presentation had some division of Google give a book report on the New AI Thing they have been working on for the past six months. Google I/O felt more like a presentation for Google's managers rather than a show meant to excite developers and consumers. The AI directive led to ridiculous situations like Android's head of engineering going on stage to talk only about an AI-powered poop emoji wallpaper generator rather than any meaningful OS improvements.

Wall Street investors were apparently one group excited by Google I/O: the company's stock jumped 4 percent after the show. Maybe that was the point of all of this.

Would you believe Google Assistant got zero mentions at Google I/O? This show was exclusively about AI, and Google didn't mention its biggest AI product. Pichai's seminal "AI First" blog post from 2016 is about Google Assistant and features an image of Pichai in front of the Google Assistant logo. Google highlighted past AI projects like Gmail's Smart Reply and Smart Compose, Google Photos' magic eraser and AI-powered search, DeepMind's AlphaGo, and Google Lens, but Google Assistant could not manage a single mention. That seemed entirely on purpose.

Heck, Google introduced a product that was a follow-up to the Nest Hub Google Assistant smart display, the Pixel Tablet, and Google Assistant still couldn't get a mention. At one point, the presenter even said the Pixel Tablet had a "voice-activated helper."


Google's avoidance of Google Assistant at I/O seemed like a further deprioritization of what used to be its primary AI product. The Assistant's last major speaker/display product launch was two years ago in March 2021. Since then, Google shipped hardware that dropped Assistant support from Nest Wi-Fi and Fitbit, and it disabled Assistant commands on Waze. It lost a patent case to Sonos and stripped away key speaker functionality, like controlling the volume, from the cast feature. Assistant Driving Mode was shut down in 2022, and one of the Assistant's biggest features, reminders, is getting shut down in favor of Google Tasks Reminders.

The Pixel Tablet sure seemed like it was supposed to be a new Google Assistant device since it looks exactly like all of the other Google Assistant devices, but Google shipped it without a dedicated smart display interface. It seems like it was conceived when the Assistant was a viable product at Google and then shipped as leftover hardware when Assistant had fallen out of favor.

The Google Assistant team has reportedly been asked to stop working on its own product and focus on improving Bard. The Assistant hasn't really ever made money in its seven years; the hardware is all sold at cost, voice recognition servers are expensive to run, and Assistant doesn't have any viable post-sale revenue streams like ads. Anecdotally, it seems like the power for those voice recognition servers is being turned way down, as Assistant commands seem to take several seconds to process lately.

The Google I/O keynote transcript counts 19 uses of the word "responsible" about Google's rollout of AI. Google is trying to draw some kind of distinction between it and OpenAI, which got to the point it's at by being a lot more aggressive in its rollout compared to Google. My favorite example of this was OpenAI's GPT-4 arrival, which came with the surprise announcement that it had been running as a beta on production Bing servers for weeks.

Google's sudden lip service toward responsible AI use seems to run counter to its actions. In 2021 Google's AI division famously pushed out AI ethics co-head Dr. Timnit Gebru for criticizing Google's diversity efforts and trying to publish AI research that didn't cast Google in a positive-enough light. Google then fired its other AI ethics co-head, Margaret Mitchell, for writing an open letter supportive of Gebru and co-authoring the contentious research paper.

In the run-up to the rushed launch of Bard, Google's answer to ChatGPT, a Bloomberg report claims that Google's AI ethics team was "disempowered and demoralized" so Google could get Bard out the door. Employees testing the chatbot said some of the answers they received were wrong and dangerous, but employees bringing up safety concerns were told they were "getting in the way" of Google's "real work." The Bloomberg report says AI ethics reviews are "almost entirely voluntary" at Google.

Google has seemingly already second-guessed its all-AI, all-the-time strategy. A Business Insider report details a post-I/O company meeting where one employee's question to Pichai nails my feelings after Google I/O: "Many AI goals across the company focus on promoting AI for its own sake, rather than for some underlying benefit." The employee asks how Google will "provide value with AI rather than chasing it for its own sake."

Pichai reportedly replied that when Googlers' current OKRs (objectives and key results, basically your goals as an employee) were written, it was during an "inflection point" around AI. Now that I/O is over, Pichai said, "I think one of the things the teams are all doing post-I/O is re-looking. Normally we don't do this, but we are re-looking at the OKRs and adapting it for the rest of the year, and I think you will see some of the deeper goals reflected, and we'll make those changes over the upcoming days and weeks."

So the AI "Code Red" was in January, and now it's May, and Google's priorities are already being reshuffled? That tracks with Google's history.

Visit link:
Google at I/O 2023: Weve been doing AI since before it was cool - Ars Technica


Terence Tao Leads White House’s Generative AI Working Group … – Pandaily

On May 13th, Terence Tao, an award-winning Australia-born Chinese mathematician, announced that he and physicist Laura Greene will co-chair a working group of the President's Council of Advisors on Science and Technology (PCAST) studying the impacts of generative artificial intelligence technology. The group will hold a public session during the PCAST meeting on May 19th, where Demis Hassabis, founder of DeepMind and creator of AlphaGo, as well as Stanford University professor Fei-Fei Li, among others, will give speeches.

According to Terence Tao's blog, the group mainly researches the impact of generative AI technology in scientific and social fields, including text-based large language models such as ChatGPT, image generators like DALL-E 2 and Midjourney, and scientific application models for protein design or weather forecasting. It is worth mentioning that Lisa Su, CEO of AMD, and Phil Venables, Chief Information Security Officer of Google Cloud, are also members of this working group.

According to the White House's official website, PCAST "develops evidence-based recommendations for the President on matters involving science, technology, and innovation policy, as well as on matters involving scientific and technological information that is needed to inform policy affecting the economy, worker empowerment, education, energy, the environment, public health, national and homeland security, racial equity, and other topics."

SEE ALSO: Mathematician Terence Tao Comments on ChatGPT

After the emergence of ChatGPT, top mathematicians like Terence Tao also paid great attention to it and began exploring how artificial intelligence could help them complete their work. In a Nature article titled "How will AI change mathematics? Rise of chatbots highlights discussion," Andrew Granville, a number theorist at McGill University in Canada, said, "We are studying a very specific question: will machines change mathematics?" Mathematician Kevin Buzzard agreed, noting that even Fields Medal winners and other very famous mathematicians are now interested in this field, which shows that it has become popular in an unprecedented way.

Previously, Terence Tao wrote on the decentralized social network Mastodon: "Today was the first day that I could definitively say that #GPT4 has saved me a significant amount of tedious work." In his experimentation, Tao discovered many hidden features of ChatGPT, such as searching for formulas, parsing documents with code formatting, rewriting sentences in academic papers, and sometimes even semantically searching incomplete math problems to generate hints.

See the article here:
Terence Tao Leads White House's Generative AI Working Group ... - Pandaily


Purdue President Chiang to grads: Let Boilermakers lead in … – Purdue University

Purdue President Mung Chiang made these remarks during the university's Spring Commencement ceremonies May 12-14.

Opening

Today is not just any graduation but the commencement at a special place called Purdue, with a history that is rich and distinct and an accelerating momentum of excellence at scale. There is nothing more exciting than to see thousands of Boilermakers celebrate a milestone in your lives with those who have supported you. And this commencement has a special meaning to me as my first in the new role serving our university.

President Emeritus Mitch Daniels gave 10 commencement speeches, each an original treatise, throughout the Daniels Decade. I was tempted to simply ask generative AI engines to write this one for me. But I thought it'd be more fun to say a few thematic words by a human for fellow humans before that becomes unfashionable.

AI at Purdue

Sometime back in the mid-20th century, AI was a hot topic for a while. Now it is again; so hot that no computation is too basic to self-anoint as AI and no challenge seems too grand to be out of its reach. But the more you know how tools such as machine learning work, the less mysterious they become.

For the moment, let's assume that AI will finally be transformational to every industry and to everyone: changing how we live, shaping what we believe in, displacing jobs. And disrupting education.

Well, after IBM's Deep Blue beat the world champion, we still play chess. After calculators, children are still taught how to add numbers. Human beings learn and do things not just as survival skills, but also for fun, or as a training of our mind.

That doesnt mean we dont adapt. Once calculators became prevalent, elementary schools pivoted to translating real-world problems into math formulations rather than training for the speed of adding numbers. Once online search became widely available, colleges taught students how to properly cite online sources.

Some have explored banning AI in education. That would be hard to enforce; it's also unhealthy, as students need to function in an AI-infused workplace upon graduation. We would rather Purdue evolve by teaching AI and teaching with AI.

That's why Purdue offers multiple major and minor degrees, fellowships and scholarships in AI and in its applications. Some will be offered as affordable online credentials, so please consider coming back to get another Purdue degree and enjoy more final exams!

And that's why Purdue will explore the best way to use AI in serving our students: to streamline processes and enhance efficiency so that individualized experiences can be offered at scale in West Lafayette. Machines free up human time so that we can do less and watch Netflix on a couch, or we can do more and create more with the time saved.

Pausing AI research is even less practical, not the least because AI is not a well-defined, clearly demarcated area in isolation. All universities and companies around the world would have to stop any research that involves math. My Ph.D. co-advisor, Professor Tom Cover, did groundbreaking work in the 1960s on neural networks and statistics, not realizing those would later become useful in what others call AI. We would rather Purdue advance AI research with nuanced appreciation of the pitfalls, limitations and unintended consequences in its deployment.

That's why Purdue just launched the universitywide Institute of Physical AI. Our faculty are the leaders at the intersection of virtual and physical, where the bytes of AI meet the atoms of what we grow, make and move, from agriculture tech to personalized health care. Some of Purdue's experts develop AI to check and contain AI, through privacy-preserving cybersecurity and fake video detection.

Limitations and Limits

As it stands today, AI is good at following rules, not breaking rules; reinforcing patterns, not creating patterns; mimicking what's given, not imagining beyond their combinations. Even individualization algorithms, ironically, work by first grouping many individuals into a small number of similarity classes.

At least for now, the more we advance artificial intelligence, the more we marvel at human intelligence. Deep Blue vs. Kasparov, or AlphaGo vs. Lee, were not fair comparisons: the machines used four orders of magnitude more energy per second! Both the biological mechanisms that generate energy from food and the amount of work we do per joule must be the envy of machines. Can AI be as energy efficient as it is fast? Can it take in energy sources other than electricity? When someday it does, and when combined with sensors and robotics that touch the physical world, you'd have to wonder about the fundamental differences between humans and machines.

Can AI, one day, make AI? And stop AI?

Can AI laugh, cry and dream? Can it contain multitudes and contradictions like Walt Whitman?

Will AI be aware of itself, and will it have a soul, however awareness and souls are defined? Will it also be T.S. Eliot's "infinitely suffering things"?

Where does an AI life start and stop anyway? What constitutes the identity of one AI, and how can it live without having to die? Indeed, if the memory and logic chips sustain and merge, is AI all collectively one life? And if AI duplicates a humans mind and memory, is that human life going to stay on forever, too?

These questions will stay hypothetical until there are breakthroughs more architectural than just compounding silicon chips' speed and feeding exploding data to black-box algorithms.

However, if some of these questions are bound, as a matter of time, to eventually become real, what then is uniquely human? What would still be artificial about artificial intelligence? Some of that eventuality might, with bumps and twists, show up faster than we had thought. Perhaps in your generation!

Freedoms and Rights

If Boilermakers must face these questions, perhaps it does less harm to consider off switches controlled by individual citizens than a ban by some bureaucracy. May the medicine be no worse than the disease, and may regulations by government agencies not be granular or static, for governments don't have a track record of understanding fast-changing technologies, let alone micromanaging them. Some might even argue that government access to data and arbitration of algorithms counts among the most worrisome uses of AI.

What we need are basic guardrails of accountability, in data usage compensation, intellectual property rights and legal liability.

We need skepticism in scrutinizing the dependence of AI engines output on their input. Data tends to feed on itself, and machines often give humans what we want to see.

We need to preserve dissent even when its inconvenient, and avoid philosopher kings dressed in AI even when the alternative appears inefficient.

We need entrepreneurs in free markets to invent competing AI systems and independently maximize choices outside the big tech oligopoly. Some of them will invent ways to break big data.

Where, when and how is data collected, stored and used? Like many technologies, AI is born neutral but suffers the natural tendency of being abused, especially in the name of the collective good. Today's most urgent and gravest nightmare of AI is its abuse by authoritarian regimes to irreversibly lock in the Orwellian 1984: the surveillance state oppressing rights, aided and abetted by AI three-quarters of a century after that bleak prophecy.

We need verifiable principles of individual rights, reflecting the Constitution of our country, in the age of data and machines around the globe. For example, MOTA:

My worst fear about AI is that it shrinks individual freedom. Our best hope for AI is that it advances individual freedom. That it presents more options, not more homogeneity. That the freedom to choose and free will still prevail.

Let us preserve the rights that survived other alarming headlines in centuries past.

Let our students sharpen the ability to doubt, debate and dissent.

Let a university, like Purdue, present the vista of intellectual conflicts and the toil of critical thinking.

Closing

Now, about asking AI engines to write this speech. We did ask it to write a commencement speech for the president of Purdue University on the topic of AI, after I finished drafting my own.

I'm probably not intelligent enough, or didn't trust the circular clichés on the web, but what I wrote had almost no overlap with what AI did. I might be biased, but the AI version reads like a B- high school essay, a grammatically correct synthesis with little specificity, originality or humor. It's so toxically generic that even adding a human in the loop to build on it proved futile. It's so boring that you would have fallen asleep even faster than you just did. By the way, you can wake up now: I'm wrapping up at last.

Maybe most commencement speeches and strategic plans sound about the same: Universities have made it too easy for language models! Maybe AI can remind us to try and be a little less boring in what we say and how we think. Maybe bots can murmur "Don't you ChatGPT me" whenever we're just echoing in an ever smaller and louder echo chamber, down to the templated syntax and tired words. Smarter AI might lead to more interesting humans.

Well, there were a few words of overlap between my draft and AI's. So, here's from both some bytes living in a chip and a human Boilermaker to you all on this 2023 Purdue Spring Commencement: Congratulations, and Boiler Up!

Read more:
Purdue President Chiang to grads: Let Boilermakers lead in ... - Purdue University


Pakistani finance minister says crypto will never be legal because of FATF – Cointelegraph

Pakistan will ban cryptocurrency services operating in the country and never legalize crypto trading, Minister of State for Finance and Revenue Aisha Ghaus Pasha said at a session of the Senate Standing Committee on Finance and Revenue on May 16, according to multiple local media reports. Other officials, including State Bank of Pakistan (SBP) Director Sohail Jawad, spoke in favor of the decision.

Pasha said banning crypto was one of the requirements set by the Financial Action Task Force (FATF), which removed Pakistan from its gray list in October. The gray list contains countries the body considers deficient in Anti-Money Laundering and Counter-Terrorist Financing measures but that are working with it to remedy their shortcomings.

The SBP and the Information and Technology Ministry were drafting the legislation for the ban, according to reports.

Related: Pakistan's president calls for more training in blockchain technology

The Pakistani Crypto Twitter community unleashed a frenzy of disapproval of the coming crypto ban. "I pray that government focuses on the right area which lead to scams and the apps which traps people instead of banning crypto," Daniyal Azam wrote. "People are making handsome income with crypto trading and Govt want to take this last hope from Poor People of Pakistan," Crypto Arena said.

FATF cannot impose sanctions on non-compliant countries, but its findings are likely to influence government and corporate policies worldwide. Pakistan's economy is in deep crisis, and the country is currently engaged in tense bailout negotiations with the International Monetary Fund, so a clean report from the FATF may be a political priority.

Crypto adoption in the country has been relatively high, with Pakistani citizens reportedly holding $20 billion worth of crypto in 2021. Government opposition to crypto is not new, however. The SBP has reportedly been seeking a crypto ban since at least January. Pakistan does, however, have plans to launch a central bank digital currency in 2025 and recently adopted a national blockchain Know Your Customer platform.

Magazine: Rogue states dodge economic sanctions, but is crypto in the wrong?

See original here:
Pakistani finance minister says crypto will never be legal because of FATF - Cointelegraph
