
Analyst Firm Bernstein Predicts Fundamental Bull Rally of Bitcoin … – Analytics Insight

Contrary to overall expectations and prevailing investor sentiment, the crypto market may be gearing up for an exponential rally, and Bitcoin Spark could be the next big project. Financial research firm Bernstein has predicted that institutions and venture capital firms will lead the imminent surge. The firm likely cited the recent win by crypto-focused investment firm Grayscale against the US Securities and Exchange Commission over its bid to offer a Bitcoin Exchange Traded Fund (ETF). This, alongside other pending Bitcoin ETF applications, could signal the onset of an aggressive bull run fuelled by institutions and investment firms entering the space.

Bitcoin uses the proof-of-work consensus mechanism to validate transactions and manage the network's operations. This process is typically expensive and requires significant initial capital, pricing out retail miners who would otherwise want to participate. However, retailers can still participate in mining by joining mining pools. One may start mining from home by obtaining a Bitcoin wallet and installing Bitcoin mining software.

Bitcoin Spark is the newest Bitcoin alternative. More than 100 Bitcoin alternatives exist today, but Bitcoin Spark stands out as the most beneficial network so far. From establishing a gasless network to offering scalable smart contracts, the platform is a sleek, state-of-the-art network. Bitcoin Spark shares Bitcoin's tokenomics, with a maximum token supply capped at 21 million coins.

Bitcoin Spark has an established and realistic roadmap, with its development laid out in five stages. The first stage comprises three objectives: deploying an Ethereum IOU token contract, which three auditing firms have already audited; finalizing centralized exchange pre-listing agreements; and running internal stress and load tests on the upcoming Bitcoin Spark ledger.

During the ongoing ICO, Bitcoin Spark's developers intend to distribute 4 million tokens across ten distinct phases. The first three phases have already been completed, and the fourth is currently running. During this phase, investors can buy BTCS tokens at a relatively low price of $2.25 per token. Buyers also receive an additional 10% bonus from the platform's creators.

Compared to the functionalities and utilities the network offers, BTCS tokens are significantly undervalued and, according to analysts, have the potential to make significant gains as the launch coincides with an imminent bull market. The ICO has only ten phases, and the network is set to launch on November 30th at a BTCS launch price of $10. According to the project, buying BTCS tokens now will yield 489% returns on November 30th.
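A quick sketch of the arithmetic behind the quoted figure (prices and bonus taken from the article; the interpretation that "489%" refers to the gross, bonus-inclusive launch-day value of the stake, rather than the net gain, is ours):

```python
# Reproducing the article's numbers: $2.25 ICO price, a 10% token bonus,
# and a $10 launch price. This is a sanity check, not investment advice.
ico_price = 2.25
launch_price = 10.0
bonus = 0.10

tokens_per_dollar = (1 + bonus) / ico_price           # BTCS received per $1 spent
gross_value_pct = tokens_per_dollar * launch_price * 100

print(round(gross_value_pct))        # 489, the quoted "489% returns"
print(round(gross_value_pct - 100))  # 389, the net gain over the initial stake
```

In other words, the headline figure appears to count the original stake as part of the "return."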

Buying Bitcoin Spark's native tokens is like buying Bitcoin in 2009. If you had this golden opportunity, how much Bitcoin would you buy?

The Bitcoin Spark network intends to incorporate a revenue-generation feature with two primary sources of income, which will be used to pay miners and network validators. This also means the network will eliminate all transaction fees charged to participants once the project has run live on mainnet for one to two years. First, the platform's miners will generate processing power that will be rented out to external entities for BTCS. Second, the platform will sell ad slots on its website and software applications, where brands can advertise their products and services.

Website: https://bitcoinspark.org/

Buy BTCS: https://network.bitcoinspark.org/register

See original here:

Analyst Firm Bernstein Predicts Fundamental Bull Rally of Bitcoin ... - Analytics Insight

Read More..

Expert Says Demand Triggered Shiba Inu 46,000,000% Rise, Not … – The Crypto Basic

The Shiba Inu burn tracker asserts that SHIB's meteoric rise in 2021 was not a direct result of Ethereum founder Vitalik Buterin's burn but a result of demand.

Shibburn, the community-driven Shiba Inu burn tracker, recently set the record straight regarding the influence of Ethereum founder Vitalik Buterin's burn of 410.2 trillion SHIB on the asset's past rally.

Contrary to popular belief, Shibburn asserted in a post that it was growing demand that ignited SHIB's meteoric rise in 2021, not Vitalik's burn.


The burn tracker clarified that Vitalik did not spend $1.7 billion to incinerate the SHIB tokens in a way that could have impacted the price at the time. Note that the value of the 410 trillion tokens now stands at $3.1 billion as of press time.

Shibburn emphasized that Buterin received the SHIB tokens when the token was initially deployed, and transferring a portion of them to a burn address months later did not directly impact the token's price positively.

The community-driven burn tracker stressed that the surge in May 2021 was not solely the result of Buterin's burn but rather a reflection of the growing interest and investment in SHIB from the broader crypto community.

This substantial demand, combined with the fact that half of the token's supply was already under the custody of Vitalik Buterin, likely contributed to the asset's extraordinary price appreciation.

An accompanying CoinMarketCap chart substantiates these claims. Notably, data from the chart shows that SHIB had already increased by 29,151,160% from a low of $0.000000000119 on Jan. 1, 2021, to a high of $0.00003469 on May 11, 2021.

This remarkable growth occurred before Vitalik Buterins burn, indicating that SHIBs rally was well underway before this event occurred.

However, following the initial rally, Shiba Inu experienced a temporary setback, retracing from the previous high. Buterin's burn occurred on May 16, during this correction phase.

The burn did little to stop the correction, with SHIB's price declining to $0.00000621 in June 2021. Nevertheless, the asset bounced back, surging 1,324% from that point to reach an all-time high of $0.00008845 in October 2021.

Shibburn pointed out that Shiba Inu is up over 46M% since 2020 but only 5x since May 6, 2021. This reality further confirms that the asset's meteoric rally occurred before the burn.
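The headline percentages can be checked against the prices quoted in the article itself; a quick back-of-the-envelope verification:

```python
# Verify the percentage moves quoted in the article from its own price data.
def pct_change(old, new):
    """Percent increase from old to new."""
    return (new / old - 1) * 100

jan_low = 0.000000000119   # Jan. 1, 2021 low
may_high = 0.00003469      # May 11, 2021 high (pre-burn)
june_low = 0.00000621      # June 2021 correction low
oct_ath = 0.00008845       # October 2021 all-time high

print(pct_change(jan_low, may_high))   # ~29,151,160%, the pre-burn rally
print(pct_change(june_low, oct_ath))   # ~1,324%, the post-correction surge
```

Both figures agree with the CoinMarketCap numbers cited above.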

Shiba Inu lead developer Shytoshi Kusama previously made similar remarks on the back of calls for sustained burns. According to Kusama, burns alone cannot trigger a massive SHIB rally; only utility and demand can.

Follow Us on Twitter and Facebook.

Disclaimer: This content is informational and should not be considered financial advice. The views expressed in this article may include the author's personal opinions and do not reflect The Crypto Basic's opinion. Readers are encouraged to do thorough research before making any investment decisions. The Crypto Basic is not responsible for any financial losses.


View original post here:

Expert Says Demand Triggered Shiba Inu 46,000,000% Rise, Not ... - The Crypto Basic


Adarsh Tripathi clinches Olomouc Chess Summer 2023 B2 … – ChessBase India

by Shahid Ahmed - 10/09/2023

Adarsh Tripathi scored an unbeaten 6.5/9 to win the Olomouc Chess Summer 2023 B2 Sportclub A64 Cup IM tournament. He finished a full point ahead of the field. Three players, FM Jakub Kusa (CZE), FM Jachym Nemec (CZE) and FM Michal Koziorowicz (POL), scored 5.5/9 and were placed second to fourth respectively. FM Sehyun Kwon (KOR) scored 7/9 to win the Olomouc Chess Summer 2023 B1 AVE Chess Cup IM tournament, finishing a half point ahead of the rest. IM Sebastian Plischki (GER, 2319) scored 6.5/9 to secure sole second place. FM R Ashwath scored 6/9 to take third. The top three prizes in each event were CZK 3000 (plus a trophy), 2000 and 1000 respectively. Photos: Kamil Mike and Jakub Fuksk

Olomouc Chess Summer 2023, the Jaroslav Fusik memorial and part of the Czech Tour 2023 series, featured a total of nine tournaments, including four round-robin events, held from 11th to 19th August 2023. Details of the events can be found here.

B2 Sportclub A64 Cup top three (L to R): 2nd FM Jakub Kusa (CZE) 5.5/9, 1st Adarsh Tripathi 6.5/9 and 3rd FM Jachym Nemec (CZE) 5.5/9

B1 AVE Chess Cup top 3 (L to R): 2nd IM Sebastian Plischki (GER, not in picture) 6.5/9, 1st FM Sehyun Kwon (KOR) 7/9 and 3rd FM R Ashwath 6/9

Position after 30...Rg8?

30...Rg8? allowed Adarsh Tripathi (2286) to combine the power of his knight and protected central passed pawn against Ronit Levitan (2131): 31.Nf6 Rh8 32.d5 Kb7 33.e6, because e8 is controlled by the well-placed knight on f6. 33...fxe6 34.dxe6 g4 35.Kd4 Kxb6 36.e7 h4 37.Nd7+, with the idea of shielding the e8-square with Nf8.

Round 5: Adarsh Tripathi - Ronit Levitan (ISR): 1-0

Adarsh Tripathi in action during various rounds

FM Adarsh Tripathi scored 6.5/9 to win the tournament and gained 30.2 Elo rating points

Players in action in various events

Trophies for the prize winners

The venue at Hotel Central Park Flora in Olomouc, Czech Republic

Daylight view of Olomouc

Post-sunset view of Olomouc

Both the B1 and B2 IM tournaments were 10-player round-robins. They were organized by SPORTCLUB Agency 64 Olomouc in cooperation with AVE CHESS at the Flora hotel in Olomouc, Czech Republic, from 12th to 19th August 2023. The time control was 90 minutes for 40 moves, then 30 minutes for the rest of the game, with a 30-second increment per move.

Details


Official site

Tournament Details

Read more:
Adarsh Tripathi clinches Olomouc Chess Summer 2023 B2 ... - ChessBase India


A Chess Coach and a Restaurateur Are Likely to Join Portland City … – Willamette Week

Ricky Gomez, who owns the award-winning Cuban bar and restaurant Palomar on the Central Eastside, is likely going to run for one of the 12 Portland City Council seats up for grabs next year.

"I haven't officially declared yet and am still in the process of evaluating the position and election cycle," Gomez said in an email to WW.

Another likely hopeful is Chad Lykins, owner of Rose City Chess, a chess club that has coached winning chess teams for years. Lykins is likely to run in District 4, which includes all of the westside and a sliver of Southeast Portland. He holds a doctorate in leadership and policy studies from Vanderbilt University and is Oregon's delegate to the United States Chess Federation.

Lykins declined to comment, saying he won't be answering any questions from the media about City Council for at least a few more weeks.

Gomez and Lykins join a growing list of City Council hopefuls. More than 15 candidates so far have either filed for the city's Small Donor Elections program, registered a political action committee with the state, or publicly declared their intent to run in one of the new City Council's four geographic voting districts next year.

The future City Council is a dramatic expansion of the current five-member council thanks to a ballot measure approved by voters last fall that radically reshaped how the city functions, including the role of the City Council itself. Members of the council will no longer manage a portfolio of bureaus as they do under the current system; instead, they will be full-time city policymakers. Bureaus will be managed by a professional city administrator.

Candidates in the running already include Robin Ye, chief of staff to state Rep. Khanh Pham (D-East Portland); transportation advocate Steph Routh; city Housing Bureau employee Chris Flanary; and Tony Morse, policy director at Oregon Recovers.

See the original post:
A Chess Coach and a Restaurateur Are Likely to Join Portland City ... - Willamette Week


Commentary: Reading the chess move of Huawei’s Mate 60 Pro … – CNA

This complicates the picture considerably, given that the semiconductor industry requires multiple levels of coordination and cooperation. Take for example, the process of outsourced assembly, testing and packaging (OSAT).

The analogy Samsung uses is the semiconductor as a human brain and its packaging, the nervous system and skeletal structure. The last stage of fabrication is the package test, during which the packaged chip undergoes final quality assurance procedures.

Companies that work with the Chinese in sensitive industries would risk the ire of America, making it difficult for them to gain future access to US high-tech know-how. That said, it is not impossible for secrets to be somehow leaked in such work.

The fact that the US lacks onshore OSAT capacity could pose security risks. The process of packaging represents a handover in the ownership and control of the device from the manufacturer to the packager and becomes a natural entry point for something to happen.

Cognisant of America's technological full-court press against it, China has adopted a whole-of-society approach against what it sees as hostile foreign forces out to contain and defeat it.

Just last month, the Ministry of State Security, which oversees China's intelligence activities, warned of espionage activities conducted against the country and called for the participation of the masses against such external threats. Relating this to technology, the implications are clear: the West is attempting to beat China down, and it is the moral duty of Chinese citizens to ensure that China does not lose the technological war.

Read the rest here:
Commentary: Reading the chess move of Huawei's Mate 60 Pro ... - CNA


Yale researchers investigate the future of AI in healthcare – Yale Daily News

Michelle Foley

Picture a world where healthcare is not confined to a clinic.

The watch on your wrist ticks steadily throughout the day, collecting and transmitting information about your heart rate, oxygen saturation and the levels of sugar in your blood. Sensors scan your face and body, making inferences about your state of health.

By the time you see a doctor, algorithms have already synthesized this data and organized it in ways that fit a diagnosis, detecting health problems before symptoms arise.

We aren't there yet, but, according to Harlan Krumholz, a professor of medicine at the School of Medicine, this could be the future of healthcare powered by artificial intelligence.

"This is an entirely historic juncture in the history of medicine," Krumholz said. "What we're going to be able to do in the next decades, compared to what we have been able to do, is going to be fundamentally different and much better."

Over the past months, Yale researchers have published a variety of papers on machine learning in medicine, from wearable devices that can detect heart defects to algorithms that can triage COVID-19 patients. Though much of this technology is still in development, the rapid surge of AI innovation has prompted experts to consider how it will impact healthcare in the near future.

Questions remain about the reliability of AI conclusions, the ethics of using AI to treat patients and how this technology might transform the healthcare landscape.

Synergy: human and artificial intelligence at Yale

Two recent Yale studies highlight what the future of AI-assisted health care could look like.

In August, researchers at the School of Medicine developed an algorithm to diagnose aortic stenosis, a narrowing of a valve in the body's largest blood vessel. Currently, diagnosis usually entails a preliminary screening by the patient's primary care provider and then a visit to the radiologist, where the patient must undergo a diagnostic Doppler exam.

The new Yale algorithm, however, can diagnose a patient from just an echocardiogram performed by a primary care doctor.

"We are at the cusp of doing transformative work in diagnosing a lot of conditions that otherwise we were missing in our clinical care," said Dr. Rohan Khera, senior author of the study and clinical director of the Yale Center for Outcomes Research & Evaluation (CORE). "All this work is powered by patients and their data, and how we intend to use it is to give back to the most underserved communities. That's our big focus area."

The algorithm was also designed to be compatible with cheap, accessible handheld ultrasound machines, said lead author Evangelos Oikonomou, a clinical fellow at the School of Medicine. This would bring first-stage aortic stenosis testing to the community, instead of limiting it to those referred to a skilled and potentially expensive radiologist. It could also allow the disease to be diagnosed before symptoms arise.

In a second study, researchers used AI to support physicians in hospitals by predicting COVID-19 outcomes for emergency room patients, all within 12 hours.

According to first author Georgia Charkoftaki, an associate research scientist at the Yale School of Public Health, hospitals often run out of beds during COVID-19 outbreaks. AI-powered predictions could help determine which patients need inpatient care and which patients can safely recover at home.

The algorithm is also designed to be adaptable to other diseases.

"When [Respiratory Syncytial Virus] babies come to the ICU, they are given the standard of care, but not all of them respond," Charkoftaki said. "Some are intubated, others are out in a week. The symptoms [of RSV] are similar to COVID, so we are working on a study for clinical metabolomics there as well."

However, AI isn't always accurate, Charkoftaki admitted.

As such, Charkoftaki said that medical professionals need to use AI in a smart way.

"Don't take it blindly, but use it to benefit patients and the discovery of new drugs," Charkoftaki told the News. "You always need a brain behind it."

Machines in medicine

Though the concept of artificial intelligence has existed since mathematician Alan Turing's work in the 1950s, the release of ChatGPT in November 2022 brought AI into public conversation. The chatbot garnered widespread attention, reaching over 100 million users in two months.

According to Lawrence Staib ENG '90, a professor of radiology and biomedical engineering, AI-powered healthcare does not yet consist of asking a sentient chatbot medical questions. Staib, who regularly uses machine learning models in his medical imaging research, says AI interfaces are more similar to a calculator: users input data, an algorithm runs and it generates an output, such as a number, image or cancer stage. The use of these algorithms is still relatively uncommon in most medical fields.

While the recent public conversation on AI has centered on large language models (programs like ChatGPT that are trained to understand text in context rather than as isolated words), these algorithms are not the focus of most AI innovation in healthcare, Staib said.

Instead, researchers are using machine learning in healthcare to recognize patterns humans would not detect. When trained on large databases, machine learning models often identify hidden signals, said David van Dijk, an assistant professor of medicine and computer science. In his research, van Dijk works to develop novel algorithms for discovering these hidden signals, which include biomarkers and disease mechanisms, to diagnose patients and determine prognosis.

"You're looking for something that's hidden in the data," van Dijk said. "You're looking for signatures that may be important for studying that disease."

Staib added that these hidden signals are also found in medical imaging.

In a computerized tomography or CT scan, for example, a machine learning algorithm can identify subtle elements of the image that even a trained radiologist might miss.

While these pattern recognition algorithms could be helpful in analyzing patient data, it is sometimes unclear how they arrive at conclusions and how reliable those conclusions are.

"It may be picking up something, and it may be pretty accurate, but it may not be clear what it's actually detecting," Staib cautions.

One famous example of that ambiguity occurred at the University of Washington, where researchers designed a machine learning model to distinguish between wolves and huskies. Since all the images of wolves were taken in snowy forests and all the images of huskies were taken in Arizona, the model learned to identify the species based on their environment. When the algorithm was given an image of a husky in the snow, it was always classified as a wolf.
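As a toy sketch of that failure mode (not the actual University of Washington model, and with made-up features), a one-rule learner trained on data where the background correlates with the label more cleanly than the animal does will learn the background:

```python
# Toy version of the wolf/husky shortcut: the background feature perfectly
# separates the training labels, while the animal-shape feature is slightly
# noisy, so a one-rule learner picks the background as its decision rule.
train = [
    # (snowy_background, wolf_shaped, label)
    (1, 1, "wolf"), (1, 1, "wolf"), (1, 0, "wolf"),    # one oddly-shaped wolf
    (0, 0, "husky"), (0, 0, "husky"), (0, 1, "husky"), # one wolf-ish husky
]

def stump_accuracy(feature_index):
    """Training accuracy of the rule 'predict wolf iff feature == 1'."""
    hits = sum((row[feature_index] == 1) == (row[2] == "wolf") for row in train)
    return hits / len(train)

# The learner picks whichever single feature scores best on the training set.
best = max([0, 1], key=stump_accuracy)  # 0 = background, 1 = shape
print(best)                             # 0: it learned "snow means wolf"

# A husky photographed in the snow is now misclassified.
husky_in_snow = (1, 0)
prediction = "wolf" if husky_in_snow[best] == 1 else "husky"
print(prediction)                       # wolf
```

The model is "accurate" on its training data while detecting the wrong thing entirely, which is exactly the ambiguity Staib describes.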

To address this issue, researchers are working on explainable artificial intelligence: the kind of program, Staib said, that not only makes a judgment, but also tells you how it made that judgment or how confident it is in that judgment.

Experts say that the goal of a partnership between human experts and AI is to reduce human error and clarify AIs judgment process.

In medicine, well-intended practitioners still sometimes miss key pieces of information, Krumholz said.

Algorithms, Krumholz said, can make sure that nothing falls through the cracks.

But he added the need for human oversight will not go away.

Ultimately, medicine still requires intense human judgements, he said.

Big data and its pitfalls

The key to training a successful machine learning model is data, and lots of it. But where this data comes from and how it is used can raise ethical questions, said Bonnie Kaplan, a professor of biostatistics and faculty affiliate at the Solomon Center for Health Law and Policy at Yale Law School.

The Health Insurance Portability and Accountability Act, or HIPAA, regulates patient data collected in healthcare institutions such as hospitals, clinics, nursing homes and dentists' offices, Kaplan said. If this data is scrubbed of identifying details, though, health institutions can sell it without patient consent.

This kind of scrubbed patient information constitutes much of the data with which health-related machine learning models are trained.

Still, health data is collected in places beyond healthcare institutions, like period-tracking apps, genetics websites and social media. Depending on the agreements that users sign, knowingly or not, to access these services, related health data can be sold with identifying information and without consent, experts say. And if scrubbed patient data is combined with this unregulated health data, it becomes relatively easy to identify people, which in turn poses a serious privacy risk.

"Healthcare data can be stigmatizing," Kaplan told the News. "It can be used to deny insurance or credit or employment."

For researchers, AI in healthcare raises other questions as well: who is responsible for regulating it, what privacy protections should be in place, and who is liable if something goes wrong.

Kaplan said that while there's a general sense of what constitutes ethical AI usage, "how to achieve [it], or even define the words, is not clear."

While some, like Krumholz, are optimistic about the future of AI in healthcare, others like Kaplan point out that much of the current discourse remains speculative.

"We've got all these promises that AI is going to revolutionize healthcare," Kaplan said. "I think that's overblown, but still very motivating. We don't get those utopian dreams, but we do get a lot of great stuff."


Hannah Mark covers Science and Society for the SciTech desk and occasionally writes for the WKND. Originally from Montana, she is a junior majoring in History of Science, Medicine, and Public Health.

Valentina Simon covers Astronomy, Computer Science and Engineering stories. She is a freshman in Timothy Dwight College majoring in Data Science and Statistics.

See the original post here:
Yale researchers investigate the future of AI in healthcare - Yale Daily News


Microchip Launches the MPLAB Machine Learning Development Suite for 8-, 16-, 32-Bit MCUs and MPUs – Hackster.io

Microchip has announced the launch of a new software package designed to put machine learning workloads onto eight-, 16-, and 32-bit microcontrollers and processors: the MPLAB Machine Learning Development Suite.

"Machine Learning is the new normal for embedded controllers, and utilizing it at the edge allows a product to be efficient, more secure and use less power than systems that rely on cloud communication for processing," claims Microchip's Rodger Richey of the core benefits behind on-device machine learning with resource-constrained hardware, known as "tinyML." "Microchip's unique, integrated solution is designed for embedded engineers and is the first to support not just 32-bit MCUs and MPUs [Microcontroller Units and Microprocessor Units], but also 8- and 16-bit devices to enable efficient product development."

Designed for use alongside the MPLAB X Integrated Development Environment (IDE), the machine learning toolkit allows developers to build machine learning models suitable for flashing to Microchip's various microcontroller and processor parts, taking into account their limited resources compared to desktop computers or cloud servers. Driven by AutoML, and with the option to use cloud computing resources to find the best algorithm for a given task, the package aims to cover feature extraction, training, validation, and testing in one, with an application programming interface (API) convertible to Python.

While Microchip had already supported the use of existing deep neural network (DNN) models from TensorFlow Lite on its microcontrollers, the launch of the MPLAB Machine Learning Development Suite demonstrates a desire to provide everything a developer needs to build something from the ground up and joins MPLAB Harmony V3 and the VectorBlox accelerator Software Development Kit (SDK), the latter designed for use with Microchip's various field-programmable gate array (FPGA) parts, in the company's on-device machine learning line-up.

The software is free for trial use on up to 1GB of data and with 2,500 labels plus five hours a month of AutoML CPU time, but no rights to deploy models for purposes other than evaluation; a standard license offers 10GB of data, unlimited labels, and 10 hours a month of CPU time, plus a license to deploy models in production for $89 a month; a "pro" license increases the CPU time to 250 hours a year (20.8 hours a month) and offers the option to output source code, rather than a pre-compiled library.

More information on the MPLAB Machine Learning Development Suite is available on the Microchip website, along with a getting-started guide which walks the reader through creating models for fan state monitoring and gesture recognition and running them on SAM D21 and AVR devices.

See the article here:
Microchip Launches the MPLAB Machine Learning Development Suite for 8-, 16-, 32-Bit MCUs and MPUs - Hackster.io


Indigenous knowledges informing ‘machine learning’ could prevent stolen art and other culturally unsafe AI practices – The Conversation Indonesia

Artificial intelligence (AI) relies on its creators for training, otherwise known as machine learning: the process by which a machine generates its intelligence from outside input.

But its behaviour is determined by the information it is provided. And at the moment, AI is a white male dominated field.

How can we ensure the evolution of AI doesn't further encroach on Indigenous rights and data sovereignty?

AI has the ability to generate art, and anyone can create Indigenous art using these tools. Even before AI, Aboriginal art has widely been appropriated and reproduced without attribution or acknowledgement, particularly for tourism industries.

And this could worsen with people now being able to generate art through AI. This is an issue not just experienced by Indigenous people, with many artists affected by their art styles being misappropriated.

Indigenous art is embedded with history and connects to culture and Country. AI-created Indigenous art would lack this. There are also implications for financial gain bypassing Indigenous artists and going to the producers of the technology.

Including Indigenous people in creating AI or deciding what AI can learn, could help minimise exploitation of Indigenous artists and their art.

Read more: AI can reinforce discrimination but used correctly it could make hiring more inclusive

In Australia there is a long history of collecting data about Aboriginal and Torres Strait Islander people, but little data has been collected for or with them. Aboriginal scholars Maggie Walter and Jacob Prehn write of this in the context of the growing Indigenous Data Sovereignty movement.

Indigenous Data Sovereignty is concerned with the rights of Indigenous peoples to own, control, access and possess their own data, and decide who to give it to. Globally, Indigenous peoples are pushing for formal agreements on Indigenous Data Sovereignty.

Many Indigenous people are concerned with how the data involving our knowledges and cultural practices is being used. This has resulted in some Indigenous lawyers finding ways to integrate intellectual property with cultural rights.

Māori scholar Karaitiana Taiuru says:

"If Indigenous peoples don't have sovereignty of their own data, they will simply be re-colonised in this information society."

Indigenous people are already collaborating on research that draws on Indigenous knowledges and involves AI.

In the wetlands of Kakadu, rangers are using AI and Indigenous knowledges to care for Country.

A weed called para grass is having a negative impact on magpie geese, which have been in decline. While the Kakadu rangers are doing their best to control the issue, the sheer size of the area (two million hectares), makes this difficult.

Collecting and analysing information about magpie geese and the impact of para grass using drones is having a positive influence on goose numbers.

Projects like these are vital given the loss of biodiversity around the globe that is causing species extinctions and ecosystem loss at alarming rates. As a result of this collaboration thousands of magpie geese are returning to Country to roost.

This project involves Traditional land owners (collectively known as Bininj in the north of Kakadu National Park and Mungguy in the south) working with rangers and researchers to help protect the environment and preserve biodiversity.

By working with Traditional Owners, monitoring systems were able to be programmed with geographically-specific knowledge, not otherwise recorded, reflecting the connection of Indigenous people with the land. This collaboration highlights the need to ensure Indigenous-led approaches.

In another example, in Sanikiluaq, an Inuit community in Nunavut, Canada, a project called PolArtic uses scientific data with Indigenous knowledges to assess the location of, and manage, fisheries.

Changing climate patterns are affecting the availability of fish, and this is another example where Indigenous knowledges are providing solutions for biodiversity issues caused by the global climate crisis.

Indigital is an Indigenous-owned profit-for-purpose company founded by Dharug, Cabrogal innovator Mikaela Jade. Jade has worked with traditional owners of Kakadu to use augmented reality to tell their stories on Country.

Indigital is also providing pathways for mob who are keen to learn more about digital technologies and combine them with their knowledges.

Read more: How should Australia capitalise on AI while reducing its risks? It's time to have your say

Although AI is a powerful tool, it is limited by the data which inform it. The success of the above projects is because AI was informed by Indigenous knowledges, provided by Indigenous knowledge holders who have a long held ancestral relationship with the land, animals and environment.

Research indicates AI is a white male-dominated industry. A global study found 12% of professionals across all levels were female, with only 4% being people of colour. Indigenous participation was not noted.

In early June, the Australian government's Safe and Responsible AI in Australia discussion paper found racial and gender biases evident in AI. Racial biases occurred, the paper found, in situations such as where AI had been used to predict criminal behaviour.

The purpose of the study was to seek feedback on how to lessen potential risks of harm from AI. Advisory groups and consultation processes were raised as possibilities to address this, but not explored in any real depth.

Indigenous knowledges have a lot to offer in the development of new technologies including AI. Art is part of our cultures, ceremonies, and identity. AI-generated art presents the risk of mass reproduction without Indigenous input or ownership, and misrepresentation of culture.

The federal government needs to consider having Indigenous knowledges inform the machine learning that underpins AI, supporting data sovereignty. There is an opportunity for Australia to become a global leader in pursuing technological advancement ethically.

Here is the original post:
Indigenous knowledges informing 'machine learning' could prevent stolen art and other culturally unsafe AI practices - The Conversation Indonesia


Why Consider Python for Machine Learning and AI? – Analytics Insight

Here is why you should consider Python for Machine Learning and AI

Python has emerged as the preferred programming language for machine learning and artificial intelligence (AI) applications. Its versatility, ease of use, and extensive library support make it the top choice for data scientists, researchers, and engineers working in these fields. In this article, we'll explore the key reasons why Python is the go-to language for machine learning and AI.

Python boasts a rich ecosystem of libraries and frameworks that simplify machine learning and AI development. Two of the most prominent libraries are TensorFlow and PyTorch, which provide tools and resources for building and training deep learning models. Scikit-Learn is another widely used library for various machine learning tasks. These libraries offer pre-built modules, making it easier to implement complex algorithms and neural networks.
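As a hedged illustration of how little code these libraries require, here is a minimal Scikit-Learn sketch (the tiny dataset is invented for this example) using the fit/predict pattern shared by its estimators:

```python
# Minimal Scikit-Learn sketch: a classifier on a tiny invented dataset.
# The same fit/predict pattern applies across Scikit-Learn estimators.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])  # one feature
y = np.array([0, 0, 0, 1, 1, 1])  # two linearly separable classes

model = LogisticRegression().fit(X, y)       # train
accuracy = (model.predict(X) == y).mean()    # evaluate on the same data
```

Swapping in a different estimator (say, a decision tree) changes only the import and constructor; the fit/predict calls stay the same.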

Python's clean and readable syntax is beginner-friendly, making it accessible to a wide range of developers, including those new to machine learning and AI. Its code reads like pseudo-code, which is human-readable and intuitive. This readability reduces the learning curve and fosters collaboration among teams with diverse backgrounds.
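As a small, invented illustration of that pseudo-code-like readability, the following line reads almost like its English description, "convert every Fahrenheit reading above 70 to Celsius, rounded":

```python
# Readability sketch: filter and transform a list in one readable line.
temperatures_f = [68, 75, 103, 59, 98]  # invented sample readings

hot_in_celsius = [round((t - 32) * 5 / 9) for t in temperatures_f if t > 70]
```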

Python has a vibrant and active community of developers, data scientists, and researchers. This community support translates into a wealth of resources, tutorials, and forums where individuals can seek help, share knowledge, and collaborate on projects. As a result, Python users benefit from continuous improvements, updates, and innovations.

Python is cross-platform, which means that it can be used to execute applications on Windows, macOS, and Linux, among other operating systems. This flexibility allows developers to work on their preferred environments and seamlessly transition between different platforms without worrying about compatibility issues.
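One way this portability shows up in practice is path handling: with the standard library's pathlib (a toy sketch here), the same code produces correct paths on Windows, macOS, and Linux without hard-coded separators:

```python
# Portable path handling: pathlib inserts the right separator per OS.
from pathlib import Path

data_dir = Path("project") / "data"  # joined with "/" or "\" as appropriate
csv_file = data_dir / "train.csv"
```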

Python's libraries, such as Pandas and NumPy, excel at data manipulation and analysis. These libraries facilitate tasks like data preprocessing, cleaning, and transformation, which are crucial for machine learning and AI projects. Python's ease of working with structured and unstructured data makes it a top choice for data-centric applications.
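A brief Pandas sketch of the preprocessing steps mentioned above, on a small invented table: drop rows with missing values, then aggregate:

```python
# Pandas preprocessing sketch on invented data: clean, then aggregate.
import pandas as pd

df = pd.DataFrame({
    "city": ["A", "A", "B", "B"],
    "sales": [10.0, None, 30.0, 50.0],
})

clean = df.dropna(subset=["sales"])            # remove rows missing "sales"
totals = clean.groupby("city")["sales"].sum()  # total sales per city
```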

Visualizing data is an essential aspect of data analysis and model evaluation. Python's libraries, like Matplotlib, Seaborn, and Plotly, provide versatile tools for creating informative and interactive data visualizations. Effective visualization aids in gaining insights from data and communicating findings.
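A minimal Matplotlib sketch (invented values) of the kind of chart used in model evaluation; the non-interactive Agg backend is selected so the script also runs without a display:

```python
# Matplotlib sketch: render a loss curve to a PNG without needing a display.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripts and servers
import matplotlib.pyplot as plt

epochs = [1, 2, 3, 4]
loss = [0.9, 0.6, 0.4, 0.3]  # invented training-loss values

fig, ax = plt.subplots()
ax.plot(epochs, loss, label="training loss")
ax.set_xlabel("epoch")
ax.set_ylabel("loss")
ax.legend()
fig.savefig("loss_curve.png")  # writes the chart to an image file
```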

Python can seamlessly integrate with big data technologies such as Apache Hadoop and Spark. Libraries like PySpark enable data scientists to process and analyze massive datasets, making Python an ideal choice for AI applications that involve large-scale data processing.

Python has strong support for cloud services like AWS, Google Cloud, and Azure. Developers can leverage Python's libraries and SDKs to interact with cloud resources, enabling scalable and cost-effective deployment of machine learning and AI models.



Learning and predicting the unknown class using evidential deep learning | Scientific Reports – Nature.com

I investigated whether m-EDL has the same performance as EDL through comparative experiments. I also investigated whether m-EDL has an advantage when including class u in the training data. The objective of this evaluation was to determine the following:

(Q1): whether the use of m-EDL reduces the prediction accuracy for a class k when the same training and test data are given to EDL and m-EDL models;

(Q2): whether a) an m-EDL model that has learned class u has the same prediction accuracy for a class k when compared with an EDL model that cannot learn class u, and b) m-EDL predicts class u with higher accuracy than EDL;

(Q3): whether the ratio of class u data included in the training data affects the accuracy of predicting classes k and u in the test data;

(Q4): what happens when the properties of class u data that are blended with the training data and test data in Q2 and Q3 are exactly the same.

To answer these questions, several datasets and models were prepared. Conditions that depended on whether data from class u were included in the training and/or test data, as well as which model was used to learn the data, were used in the evaluation.

Here, I evaluate whether the performance of m-EDL is comparable to that of EDL in the situation assumed by EDL; that is, the situation where all training and test data belong to class k. In other words, both the training and test data were composed only of images from MNIST, and the following two conditions were compared: (1) the EDL model trained and tested on datasets with no class u data and (2) the m-EDL model trained and tested on datasets with no class u data.

Figure 3 compares the accuracies of EDL (thin solid red line) and m-EDL (thick solid blue line). Each line shows the mean value and the shaded areas indicate the standard deviation. The accuracy of EDL changes with respect to each uncertainty threshold; the accuracy is plotted on the vertical axis with the uncertainty threshold indicated on the horizontal axis. The accuracy of EDL improves as the threshold decreases because only classification results the model is confident of are treated as classification results. Figure 3a shows the results when \(\widehat{p_{k^+}}\) is used for the classification results of m-EDL. An uncertainty threshold is not used for the classification result of m-EDL; a result parallel to the horizontal axis is obtained. In contrast, Fig. 3b shows the results when \(\widehat{p_{k^+}}\) is converted to \(\overline{p_k}\) and the uncertainty threshold used for EDL is also used for m-EDL.

Accuracy of EDL and m-EDL when both the training and test datasets contain no class u data. (a) Results when \(\widehat{p_{k^+}}\) is used in m-EDL classification. (b) Results when \(\overline{p_k}\), converted from \(\widehat{p_{k^+}}\), is used in m-EDL classification with the same uncertainty threshold as that of EDL.

These graphs show that the accuracy of m-EDL is lower than that of EDL, except in the region where the uncertainty threshold is 0.9 or more. However, no substantial decrease in accuracy is observed, and it can be said that the performance of m-EDL would be sufficient depending on the application.
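The thresholding behind these curves can be sketched as follows (a NumPy toy with invented values, not the paper's code or data): a prediction counts only if its uncertainty is below the threshold, so lowering the threshold keeps only confident, and typically more accurate, predictions.

```python
# Toy sketch of uncertainty-thresholded accuracy (all values invented).
import numpy as np

pred = np.array([0, 1, 2, 1, 0])                      # predicted classes
true = np.array([0, 1, 1, 1, 2])                      # ground-truth classes
uncertainty = np.array([0.05, 0.10, 0.80, 0.20, 0.90])  # per-sample uncertainty

def thresholded_accuracy(threshold):
    """Accuracy over only the samples the model is confident about."""
    accepted = uncertainty < threshold
    if not accepted.any():
        return None  # nothing accepted at this threshold
    return float((pred[accepted] == true[accepted]).mean())
```

With a strict threshold of 0.5, only the three confident samples are accepted (all correct); with a loose threshold of 1.0, every sample is accepted, including the two uncertain, wrong predictions, and accuracy drops accordingly.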

In this experiment, the properties of the class u data that are included in the training and test data are completely different; that is, they are obtained from different datasets. This makes it possible to confirm whether the learned uncertain class features are regarded as features that are not class k rather than features that are class u learned during training.

First, I consider whether an m-EDL model that has learned class u has the same prediction accuracy for class k when compared with an EDL model that cannot learn class u (Q2a). I then consider whether it can determine class u with higher prediction accuracy (Q2b).

The following two cases are considered: (1) EDL is tested on data that include Fashion MNIST data, and m-EDL is trained on data that include EMNIST data but tested on data that include Fashion MNIST data. Figure 4a–c shows the results for class u rates of 25%, 50%, and 75% in the training data, respectively. The lines of different colors indicate the results for class u rates of 25%, 50%, and 75% in the test data. These are percentages of the number of MNIST data. Additionally, Table 1 presents the mean accuracies of EDL and m-EDL for each condition. (2) EDL is tested on data that include EMNIST data, and m-EDL is trained on data that include Fashion MNIST data but tested on data that include EMNIST data. Figure 4d–f shows the results for class u rates of 25%, 50%, and 75% in the training data, respectively. The lines of different colors indicate the results for class u rates of 25%, 50%, and 75% in the test data. These are percentages of the number of MNIST data. Additionally, Table 2 presents the mean accuracies of EDL and m-EDL for each condition.

Accuracy comparison of EDL and m-EDL. Line colors indicate the proportion of class u in the test data, and top and bottom plots show the accuracy for class k data and class u data, respectively. Results when m-EDL has learned class u (EMNIST data) but is tested on Fashion MNIST data for class u mix rates in the training data of (a) 25%, (b) 50%, and (c) 75%. These are percentages of the number of MNIST data. Results when m-EDL has learned class u (Fashion MNIST data) but is tested on EMNIST data for class u mix rates in the training data of (d) 25%, (e) 50%, and (f) 75%.

Under these two conditions, the one-hot vector \(y_j\) of the data has K = 10 dimensions. Therefore, all elements of the one-hot vectors of class u (EMNIST or Fashion MNIST data) in the test data were set to 0. In each of the following cases, the same processing was applied when EDL was tested on data including class u data.
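The labeling described above can be sketched like this (illustrative code, not the paper's implementation): known classes receive a K = 10 one-hot vector, while class u samples receive an all-zero vector.

```python
# Label sketch: one-hot vectors for known classes, all zeros for class u.
import numpy as np

K = 10  # number of known classes (the MNIST digits)

def label_vector(class_index):
    """One-hot vector for a known class; all zeros for class u (index None)."""
    y = np.zeros(K)
    if class_index is not None:
        y[class_index] = 1.0
    return y

y_known = label_vector(3)       # one-hot vector for digit class 3
y_unknown = label_vector(None)  # class u: every element set to 0
```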

The left plots of Fig. 4a–c and Table 1 (avg. accuracy for k) show the results for class k data under the first condition. The line color indicates the ratio of the class u data included in the test data, and the accuracy is expected to decrease as the mix ratio of class u in the test data increases. The results show that the accuracy of m-EDL with respect to class k is high and robust to the mix rate of class u in the training and test data: the left plots of Fig. 4a–c show that the m-EDL model that has learned class u achieves equal or higher accuracy with respect to class k than the EDL model, which cannot learn class u. Moreover, the accuracy of m-EDL is not easily affected by the ratio of class u in the test data or the training data.

The right plots of Fig. 4a–c and Table 1 (avg. accuracy for u) show the accuracy for class u data, that is, how often data judged as "I do not know" actually differ from the data classes learned so far. The right plots of Fig. 4a–c show that the accuracy of m-EDL with respect to class u is high and robust to the mix rate of class u in the training and test data. The accuracy for class u of EDL naturally increases as the ratio of class u increases, because even if EDL classifies class u randomly, a larger share of the data belongs to class u.

Figure 4d–f and Table 2 (avg. accuracy for k) show the results for the second condition, which is exactly the same as the first except that the EMNIST and Fashion MNIST datasets switch roles. Again, the accuracy of m-EDL with respect to class k is high and robust, as in the left plots of Fig. 4a–c. The results in the left plots of Fig. 4d–f reveal that the m-EDL model that learned class u, when compared with EDL, achieved an equal or higher accuracy with respect to class k, and the accuracy of m-EDL was not easily affected by the ratio of class u in the test and training data.

However, the right plots of Fig. 4d–f and Table 2 (avg. accuracy for u) show that the accuracy of m-EDL with respect to class u cannot be said to be better than that of EDL.

In the comparison of the two patterns in "Performance comparison of EDL and m-EDL when class u is included in the training and test data (Q2)", if the ratio of class u in the training data affects the prediction accuracy of the class k and u data, then the ratio of class u included in the training data must be appropriately selected. To determine whether this is the case, I used the results from that section (Fig. 4a–c and d–f, which have training data mix ratios of 25%, 50%, and 75%, respectively) and added the following two cases: (1) Fashion MNIST is included in the test data, but neither EDL nor m-EDL is trained on class u data (a training data mix ratio of 0%; Fig. 5a), and (2) EMNIST is included in the test data, but neither EDL nor m-EDL is trained on class u data (a training data mix ratio of 0%; Fig. 5b). The lines of different colors indicate the results for class u rates of 25%, 50%, and 75% in the test data.

Accuracy comparison of EDL and m-EDL when neither EDL nor m-EDL have learned class u. Line colors indicate the mix rate of class u in the test data, and left and right plots show the accuracy for class k data and class u data, respectively. (a) Results for Fashion MNIST data. (b) Results for EMNIST data.

In the left plot of Fig. 5a, the accuracy for class k improved, as shown in the left plots of Fig. 4a–c, whereas in the right plot of Fig. 5a, there was no improvement in accuracy for class u. In the right plots of Fig. 4a–c, the accuracy for class u improved even when the ratio of class u in the training data was small. These results suggest that the accuracy for class u may be improved by having m-EDL learn even a small amount of class u data. Moreover, there is no particular need for these data to be related to the class u data in the test data.

The right plot of Fig. 5b shows that m-EDL did not lead to improvements in accuracy for class u. Moreover, in the right plots of Fig. 4d–f, the accuracy of m-EDL for class u is not better than that of EDL; however, compared with the results in the right plot of Fig. 5b, it is clear that the accuracy of m-EDL for class u improves even if the ratio of class u in the training data is small.

It can be inferred from these comparisons that the amount of accuracy improvement for class u changes depending on the characteristics of class u in the training and test data.

As shown in "Performance comparison of EDL and m-EDL when class u is included in the training and test data (Q2)" and "Effect of the ratio of the class u included in the training data on the prediction accuracy of classes k and u in the test dataset (Q3)", the amount of improvement in accuracy for class u data changes depending on the characteristics of u in the training data and test data. Hence, I evaluated whether the accuracy for class u always improves when the characteristics of u in the training and test data are exactly the same (i.e., when the class u data are from the same dataset).

The following two conditions were considered: (1) when Fashion MNIST is included in both the test and training data [Fig. 6a–c and Table 3 (avg. accuracy for k and u)] and (2) when EMNIST is included in both the test and training data [Fig. 6d–f and Table 4 (avg. accuracy for k and u)].

Accuracy comparison of EDL and m-EDL. Line colors indicate the proportion of class u in the test data, and top and bottom plots show the accuracy for class k data and class u data, respectively. Results when m-EDL has learned class u (Fashion MNIST) for class u mix rates in the training data of (a) 25%, (b) 50%, and (c) 75%. These are percentages of the number of MNIST data. Results when m-EDL has learned class u (EMNIST) for class u mix rates in the training data of (d) 25%, (e) 50%, and (f) 75%.

The differences among Fig. 6a–c and d–f are the mix rates of class u in the training data (25%, 50%, and 75%, respectively). The lines of different colors indicate the results for class u rates of 25%, 50%, and 75% in the test data. These are percentages of the number of MNIST data. In particular, the right-hand plots of Fig. 6a–f confirm that the accuracy of m-EDL is higher than in the cases considered for Q2 and Q3 and is almost 100%.

In the cases of Q2 and Q3, the class u data in the training and/or test data have different characteristics, and the accuracy of m-EDL on the class u data changed depending on the combination. Meanwhile, in the Q4 cases, class u data had the same characteristics during both training and testing, and hence, the accuracy is very high. From this, it is clear that the feature learning of class u in the training data contributes to the improvement in accuracy that m-EDL exhibits when learning class u. However, in the comparisons of Q2, particularly when m-EDL was trained using EMNIST and both EDL and m-EDL were tested on data including Fashion MNIST, examples can be found where the accuracy improved even when the unknown classes in the training and test data differ. Therefore, m-EDL has the potential to improve accuracy by excluding uncertain data as a result of learning unrelated data that do not belong to class k data, although this depends on the combination of class u data in the training and test data.

Here, I hypothesize about which combination of class u datasets mixed into the training data will increase the class u accuracy in testing. The hypothesis is as follows: if class u data whose characteristics are as close as possible to those of class k are learned during training, class u data in the test can be discriminated as class u as long as the characteristics of class u given during the test differ from those seen in training; that is, if m-EDL learns a boundary that distinguishes the range of class k more strictly by using class u data whose characteristics are close to those of class k, class u can be easily distinguished. Conversely, if the class u data during training are far from the characteristics of class k, the decision boundary between k and u is freely determined, and if the class u data in the test are close to class k, they may be incorrectly classified.

To test this hypothesis, I introduced another dataset (Cifar-10 [40]) and evaluated the similarity of the characteristics of the different datasets. For the similarity calculation, the Cifar-10 images were resized to 28 × 28 pixels (consistent with the other datasets) and grayscaled using a previously proposed method [41]. Table 5 presents the similarity of MNIST, EMNIST, Fashion-MNIST, and Cifar-10. Here, the structural similarity (SSIM) was determined by randomly selecting 500,000 images from the datasets to be compared, and the mean and variance were calculated as the similarity between the datasets.
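For reference, the global SSIM of two equal-sized grayscale images follows the standard formula with constants C1 = (0.01L)^2 and C2 = (0.03L)^2 for data range L. A NumPy sketch (not the paper's code, which averages SSIM over many sampled image pairs):

```python
# Global SSIM between two grayscale images (standard constants).
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Structural similarity computed over whole images, not local windows."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

# Identical images score 1; dissimilar images score closer to 0 (or below).
img = np.arange(784, dtype=np.float64).reshape(28, 28) % 256
score_same = ssim_global(img, img)
```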

The distance between datasets was determined as the inverse of the SSIM, and the positional relationship of the datasets on a two-dimensional plane was estimated via multidimensional scaling (MDS) [41], as shown in Fig. 7.
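The MDS step can be sketched with scikit-learn (the distance matrix below is invented for illustration, not the paper's SSIM-derived values):

```python
# MDS sketch: embed four datasets in 2-D from precomputed dissimilarities.
import numpy as np
from sklearn.manifold import MDS

# Invented symmetric distance matrix for M(NIST), E(MNIST), F(ashion), C(ifar).
dist = np.array([
    [0.0, 1.0, 2.0, 4.0],
    [1.0, 0.0, 2.5, 4.5],
    [2.0, 2.5, 0.0, 3.0],
    [4.0, 4.5, 3.0, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dist)  # one 2-D point per dataset
```

Plotting `coords` reproduces a map like Fig. 7, where inter-point distances approximate the given dissimilarities.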

Location of each dataset estimated via MDS, where the points M, F, E, and C represent the locations of the MNIST, Fashion-MNIST, EMNIST, and Cifar-10 datasets, respectively, and the distance between points is proportional to the inverse of the similarity. The numbers on the horizontal and vertical axes are dimensionless.

As shown in Fig. 7, MNIST was more similar to EMNIST than to Fashion-MNIST. The newly introduced Cifar-10 is an image dataset whose characteristics differ from those of MNIST more than those of both EMNIST and Fashion-MNIST do. The hypothesis explains the result presented in "Performance comparison of EDL and m-EDL when class u is included in the training and test data (Q2)" that the accuracy of class u was higher in Case 1, where u was trained with EMNIST and classified with test data containing Fashion MNIST, than in Case 2, where u was trained with Fashion-MNIST and classified with test data containing EMNIST. The accuracy of class u was higher in Case 1 because the characteristics of EMNIST were closer than those of Fashion-MNIST to those of MNIST; m-EDL trained on EMNIST was able to identify Fashion-MNIST, which was given during testing and had more distant characteristics than EMNIST, as class u. To verify this hypothesis, I compared the accuracy of class u in Case 3, where class u was trained with Cifar-10 and classified with test data containing EMNIST, with those for Cases 1 and 2. If the hypothesis is correct, the accuracy of class u should decrease in the order Case 1 > Case 2 > Case 3.

Table 6 presents the accuracies of m-EDL for class u in each case. Indeed, the accuracy of Case 3 was the lowest, suggesting that if class u data with characteristics close to those of class k are used during training, class u in the test can be detected as class u as long as the characteristics of class u given during testing are farther from class k than those used in training.

