
Meeranda, the Human-Like AI, Welcomes Recognized Machine … – Canada NewsWire

TORONTO, Sept. 14, 2023 /CNW/ - Meeranda, a privately held Artificial Intelligence (AI) solutions provider, serving both Small and Medium Businesses (SMBs) and Global Multinational Corporations (MNCs), announced today that Francesca Lazzeri, Ph.D., has joined Meeranda's Advisory Board.

Dr. Lazzeri's expertise lies in the field of applied machine learning and AI. She has more than 15 years of experience in academic research, applied machine learning, AI innovation, and engineering team management.

Currently serving as the Senior Director of Data Science and AI, Cloud and AI at Microsoft, Dr. Lazzeri leads a team of skilled data and machine learning scientists. She spearheads the development of intelligent applications on the Cloud, leveraging a wide range of data and techniques including generative AI, time series forecasting, experimentation, causal inference, computer vision, natural language processing, and reinforcement learning.

"We are honored that Dr. Lazzeri has accepted to join Meeranda's Advisory Board,"said Mr. Raji Wahidy, Co-Founder and CEO of Meeranda. "Dr. Lazzeri's contributions to the advancement of machine learning and AI technology are immense, quite well-known, and respected amongst her peers within this sector. Her addition is further validation that what we are embarking on at Meeranda is quite disruptive. We are excited and looking forward to leveraging Dr. Lazzeri's experience and expertise as we work towards delivering The New Personalized Customer Experience we promise to SMBs and Global MNCs."

Academically, Dr. Lazzeri is an Adjunct Professor at New York's Columbia University, teaching Python for machine learning and AI students. She has also contributed to the literature by authoring several books, including "Machine Learning Governance for Managers", "Impact of Artificial Intelligence in Business and Society", and "Machine Learning for Time Series Forecasting with Python."

"We are thrilled to welcome Dr. Lazzeri to Meeranda," said Mr. Jayson Ng, Co-Founder and Chief Research Officer of Meeranda. "Dr. Lazzeri's expertise will be instrumental in bridging the gap between cutting-edge research and real-world applications, thus pushing the technological boundaries and helping us take our product to new heights."

Dr. Lazzeri currently serves as an Advisor on the Advisory Board of the European Union for the AI-CUBE project and as a member of the Women in Data Science (WiDS) initiative. She is also known for having advised, mentored, and coached data scientists and machine learning engineers at the Massachusetts Institute of Technology (MIT). She was also a research fellow at Harvard University.

"I am very excited to join Meeranda's Advisory Board,"said Dr. Francesca Lazzeri, Senior Director of Data Science and AI, Cloud and AI at Microsoft. "Meeranda's unique and innovative approach at tackling a very pressing problem is quite disruptive. I strongly believe in their vision, mission, and the leadership team behind Meeranda. I look forward to further contributing to Meeranda's imminent success."

Dr. Lazzeri holds a Master's Degree in Economics and Institutional Studies from Luiss Guido Carli University, a Doctor of Philosophy (Ph.D.) in Economics and Technology Innovation from Scuola Superiore Sant'Anna, and a Postdoc Research Fellowship in Economics from Harvard University.

About Meeranda

Meeranda is a privately held Artificial Intelligence (AI) solutions provider, serving Small and Medium Businesses (SMBs) and Global Multinational Corporations (MNCs). Meeranda is best known for its Real-Time Human-Like AI, which intends to offer the new personalized customer experience to combat the ongoing frustration of dealing with chatbots and half-baked AI solutions. Although in its early stages, Meeranda already has agreements across six countries and seven industries.

Follow Meeranda

Website: https://meeranda.com
Media Kit: https://meeranda.com/media-kit/
X: https://x.com/HelloMeeranda
Facebook: https://www.facebook.com/HelloMeeranda/
LinkedIn: https://www.linkedin.com/company/HelloMeeranda
Instagram: https://instagram.com/HelloMeeranda
Threads: https://instagram.com/HelloMeeranda
YouTube: https://www.youtube.com/@meeranda
TikTok: https://www.tiktok.com/@meeranda_ai

SOURCE Meeranda

For further information: Meeranda Inc., Media Relations, [emailprotected]

See the original post:
Meeranda, the Human-Like AI, Welcomes Recognized Machine ... - Canada NewsWire

Read More..

Defence force chief Angus Campbell warns deepfakes and AI will drive era of ‘truth decay’ – ABC News

The defence force chief has warned the world is entering an era of "truth decay" where misinformation will undermine democracy and leave liberal, Western societies like Australia increasingly exposed to enemies.

General Angus Campbell has outlined future challenges where rapidly advancing technology could soon make it impossible for most people to distinguish fact from fiction.

In an overnight speech, the defence chief predicted artificial intelligence and deepfakes will further damage public confidence in elected officials and pose a serious risk.

"As these technologies quickly mature, there may soon come a time when it is impossible for the average person to distinguish fact from fiction, and although a tech counter response can be anticipated, the first impression is often the most powerful," he told the Australian Strategic Policy Institute (ASPI).

"This tech future may accelerate truth decay, greatly challenging the quality of what we call public 'common sense', seriously damaging public confidence in elected officials, and undermining the trust that binds us."

General Campbell cited a "deepfake" video of Ukrainian President Volodymyr Zelenskyy, which emerged last year falsely portraying the wartime leader urging his military to surrender to invading Russian forces.

"Uncertainty erodes our traditional understanding of deterrence by undermining our calculus of capability, our assurance of credibility, and our clarity of communication," General Campbell declared.

"Uncertainty is the bedfellow of timidity, the perfect foundation from which others may win without fighting," he added.

General Campbell nominated China's People's Liberation Army (PLA) as an expert proponent of psychological, legal and information warfare techniques that could disrupt democratic societies and undermine their will to fight.

"While these operations are, of course, not new phenomena, informatic disruption is exponentially, instantaneously and globally enhancing the prevalence and effectiveness of a three-warfares approachby any reasonably sophisticated practitioner," he said.

"Such an approach may bypass the need for a physical attack and strike directly at the psychological, changing perceptions of reality, with profound implications for deterrence".

During his address, General Campbell also warned of future crises involving food and water security, as well as waves of migration due to climate change.

"This disruption is happening fasterand less predictably than we all hoped," General Campbell told his Canberra audience.

"Without the global momentum needed, we may all be humbled by a planet made angry by our collective neglect."

See the rest here:

Defence force chief Angus Campbell warns deepfakes and AI will drive era of 'truth decay' - ABC News

Read More..

Driving the future of mobility with SenseAuto: The AGI Power Behind … – Automotive News Europe

SenseAuto is a leading global provider of artificial general intelligence (AGI) technology for the smart auto era. By integrating intelligent cabin, intelligent driving and AI cloud, SenseAuto empowers the next-generation mobility with its full-stack AGI capabilities to create a safer, smarter, and more enjoyable third living space experience.

Its product portfolio includes the vision-based Driver Monitoring System, Occupant Monitoring System, Near-Field Monitoring System, Innovative Cabin App, Cabin Brain as well as the ADAS offerings for pilot driving and parking.

SenseAuto is committed to upholding high industry standards to ensure a safe and seamless journey for all users. The Company has obtained ASPICE L2, ISO 26262 ASIL B and ASIL D, ISO 9001 and ISO/SAE 21434 certificates, along with other certificates for security and quality management.

With extensive experience in mass production, SenseAuto has established successful partnerships with over 30 renowned car manufacturers worldwide, including Chery, GAC, Great Wall, HiPhi, NIO, SAIC, and ZEEKR. SenseAuto is the designated supplier for more than 36 million vehicles cumulatively, covering over 160 diverse models. The Company has an active R&D presence in China (Shanghai, Beijing, Shenzhen and Guangzhou), Germany and Japan.

For more information, please visit SenseAuto's website and LinkedIn page.

Read the original here:

Driving the future of mobility with SenseAuto: The AGI Power Behind ... - Automotive News Europe

Read More..

An Introduction To Diffusion Models For Machine Learning: What … – Dataconomy

Diffusion models owe their inspiration to the natural phenomenon of diffusion, where particles disperse from concentrated areas to less concentrated ones. In the context of artificial intelligence, diffusion models leverage this idea to generate new data samples that resemble existing data. By gradually adding noise to training data according to a schedule, and learning to reverse that corruption step by step, diffusion models can generate diverse outputs that capture the underlying distribution of the training data.
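To make that forward (noising) half of the process concrete, here is a minimal NumPy sketch assuming a simple linear beta schedule; the function name and values are illustrative, not any particular library's API.

```python
import numpy as np

# Minimal sketch of the *forward* (noising) half of a diffusion process,
# assuming a linear beta schedule; names and values are illustrative.
def forward_diffuse(x0, num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Return the noised versions x_t of x0 for every step t."""
    betas = np.linspace(beta_start, beta_end, num_steps)
    alphas_cumprod = np.cumprod(1.0 - betas)
    samples = []
    for t in range(num_steps):
        noise = np.random.randn(*x0.shape)
        # Closed-form jump to step t: x_t = sqrt(a_bar_t)*x0 + sqrt(1-a_bar_t)*noise
        x_t = np.sqrt(alphas_cumprod[t]) * x0 + np.sqrt(1.0 - alphas_cumprod[t]) * noise
        samples.append(x_t)
    return samples

x0 = np.random.rand(28, 28)                 # stand-in for one training image
noised = forward_diffuse(x0)
print(noised[-1].mean(), noised[-1].std())  # late steps approach pure Gaussian noise
```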

The power of diffusion models lies in their ability to harness the natural process of diffusion to revolutionize various aspects of artificial intelligence. In image generation, diffusion models can produce high-quality images that are virtually indistinguishable from real-world examples. In text generation, diffusion models can create coherent and contextually relevant text that is often used in applications such as chatbots and language translation.

Diffusion models have other advantages that make them an attractive choice for many applications. For example, they are relatively easy to train and require minimal computational resources compared to other types of deep learning models. Moreover, diffusion models are highly flexible and can be easily adapted to different problem domains by modifying the architecture or the loss function. As a result, diffusion models have become a popular tool in many fields of artificial intelligence, including computer vision, natural language processing, and audio synthesis.

Diffusion models take their inspiration from the concept of diffusion itself. Diffusion is a natural phenomenon in physics and chemistry, where particles or substances spread out from areas of high concentration to areas of low concentration over time. In the context of machine learning and artificial intelligence, diffusion models draw upon this concept to model and generate data, such as images and text.

These models simulate the gradual spread of information or features across data points, effectively blending and transforming them in a way that produces new, coherent samples. This inspiration from diffusion allows diffusion models to generate high-quality data samples with applications in image generation, text generation, and more.

The concept of diffusion and its application in machine learning has gained popularity due to its ability to generate realistic and diverse data samples, making them valuable tools in various AI applications.

Diffusion models are best understood alongside four other families of generative models, described below:

GANs consist of two neural networks: a generator network that generates new data samples, and a discriminator network that evaluates the generated samples and tells the generator whether they are realistic or not.

The generator and discriminator are trained simultaneously, with the goal of improving the generator's ability to produce realistic samples while the discriminator becomes better at distinguishing between real and fake samples.
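As a rough illustration of this adversarial loop (a toy sketch, not any production system), the following PyTorch snippet alternates discriminator and generator updates on a synthetic 2-D distribution; all layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

# Toy GAN: a generator maps noise to fake samples; a discriminator scores
# samples as real (1) or fake (0). Both are trained in alternation.
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(64, 2) * 0.5 + 2.0        # toy "real" distribution
for step in range(200):
    # --- train discriminator: push real -> 1, fake -> 0 ---
    fake = G(torch.randn(64, 16)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # --- train generator: try to fool the discriminator (fake -> 1) ---
    fake = G(torch.randn(64, 16))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```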

VAEs are a type of generative model that uses a probabilistic approach to learn a compressed representation of the input data. They consist of an encoder network that maps the input data to a latent space, and a decoder network that maps the latent space back to the input space.

During training, the VAE learns to reconstruct the input data and generate new samples by sampling from the latent space.
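A compact sketch of the encoder/decoder idea (toy dimensions, assuming PyTorch); the reparameterization trick is what lets gradients flow through the sampling step.

```python
import torch
import torch.nn as nn

# Toy VAE: the encoder outputs (mu, logvar) of a latent Gaussian, a sample z
# is drawn via reparameterization, and the decoder reconstructs the input.
class TinyVAE(nn.Module):
    def __init__(self, d_in=784, d_z=8):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)   # outputs mu and logvar, stacked
        self.dec = nn.Linear(d_z, d_in)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        recon = torch.sigmoid(self.dec(z))
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, kl

vae = TinyVAE()
x = torch.rand(32, 784)
recon, kl = vae(x)
# ELBO-style loss: reconstruction term plus KL regularizer on the latent space.
loss = nn.functional.binary_cross_entropy(recon, x, reduction="sum") + kl
```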

Normalizing flows are a type of generative model that transforms the input data into a simple probability distribution, such as a Gaussian distribution, using a series of invertible transformations. The transformed data is then sampled to generate new data.

Normalizing flows have been used for image generation, music synthesis, and density estimation.
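The defining property is exact likelihood via the change-of-variables formula. Here is a deliberately tiny sketch with a single invertible affine layer (illustrative only; real flows stack many such layers).

```python
import torch

# One affine flow layer: z = (x - b) * exp(-s), with dz/dx = exp(-s), so
# log p(x) = log N(z; 0, 1) + log|dz/dx| = log N(z) - s.
s = torch.tensor(0.3, requires_grad=True)  # log-scale (learnable)
b = torch.tensor(1.0, requires_grad=True)  # shift (learnable)

def log_prob(x):
    z = (x - b) * torch.exp(-s)                                    # forward map
    log_base = -0.5 * z**2 - 0.5 * torch.log(torch.tensor(2 * torch.pi))
    return log_base - s                                            # + log|det|

x = torch.randn(256) * 1.5 + 1.0        # toy data
loss = -log_prob(x).mean()              # maximize exact likelihood
loss.backward()                         # gradients flow to s and b

# Sampling inverts the map on Gaussian noise: x = z * exp(s) + b
samples = torch.randn(5) * torch.exp(s) + b
```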

Autoregressive models generate new data by predicting the next value in a sequence, given the previous values. These models are typically used for time-series data, such as stock prices, weather forecasts, and language generation.
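A minimal autoregressive example (illustrative): fit a linear AR(2) model that predicts the next value of a series from the previous two, then generate new values by feeding predictions back in.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
series = np.sin(0.1 * t) + 0.05 * rng.standard_normal(200)  # toy time series

# Build (previous two values -> next value) training pairs.
X = np.stack([series[:-2], series[1:-1]], axis=1)
y = series[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares AR coefficients

# Autoregressive generation: each prediction becomes input for the next step.
window = list(series[-2:])
for _ in range(10):
    window.append(np.dot(coef, window[-2:]))
print(window[-10:])
```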

Diffusion models are based on the idea of iteratively refining a random noise vector until it matches the distribution of the training data. The diffusion process involves a series of transformations that progressively modify the noise vector, such that the final output is a realistic sample from the target distribution.

The basic architecture of a diffusion model consists of a sequence of layers, each of which applies a nonlinear transformation to the input noise vector. Each layer has a set of learnable parameters that determine the nature of the transformation applied.


The output of each layer is passed through a nonlinear activation function, such as sigmoid or tanh, to introduce non-linearity in the model. The number of layers in the model determines the complexity of the generated samples, with more layers resulting in more detailed and realistic outputs.

To train a diffusion model, we first need to define a loss function that measures the dissimilarity between the generated samples and the target data distribution. Common choices for the loss function include mean squared error (MSE), binary cross-entropy, and log-likelihood. Next, we optimize the model parameters by minimizing the loss function using an optimization algorithm, such as stochastic gradient descent (SGD) or Adam. During training, the model generates samples by iteratively applying the diffusion process to a random noise vector, and the loss function calculates the difference between the generated sample and the target data distribution.
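The following hedged sketch shows one common way this recipe is realized in practice, the DDPM-style noise-prediction objective with an MSE loss and Adam; the network, sizes, and data are placeholders, not the article's implementation.

```python
import torch
import torch.nn as nn

# The model is trained to predict the noise that was added at a random
# diffusion step t; minimizing the MSE to the true noise is the loss.
model = nn.Sequential(nn.Linear(2 + 1, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
betas = torch.linspace(1e-4, 0.02, 1000)
a_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative product of (1 - beta)

data = torch.randn(512, 2) * 0.3 + 1.0      # toy 2-D target distribution
for step in range(500):
    x0 = data[torch.randint(0, 512, (64,))]
    t = torch.randint(0, 1000, (64,))
    eps = torch.randn_like(x0)
    # Noise x0 forward to step t in closed form.
    xt = a_bar[t].sqrt()[:, None] * x0 + (1 - a_bar[t]).sqrt()[:, None] * eps
    # Condition on the (normalized) timestep and regress the true noise.
    pred = model(torch.cat([xt, t[:, None] / 1000.0], dim=1))
    loss = nn.functional.mse_loss(pred, eps)
    opt.zero_grad(); loss.backward(); opt.step()
```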

One advantage of diffusion models is their ability to generate diverse and coherent samples. Unlike other generative models, such as Generative Adversarial Networks (GANs), diffusion models do not suffer from mode collapse, where the generator produces limited variations of the same output. Additionally, diffusion models can be trained on complex distributions, such as multimodal or non-Gaussian distributions, which are challenging to model using traditional machine learning techniques.

Diffusion models have numerous applications in computer vision, natural language processing, and audio synthesis. For example, they can be used to generate realistic images of objects, faces, and scenes, or to create new sentences and paragraphs that are similar in style and structure to a given text corpus. In audio synthesis, diffusion models can be employed to generate realistic sounds, such as speech, music, and environmental noises.

There have been many advancements in diffusion models in recent years, and several popular diffusion models have gained attention in 2023. One of the most notable ones is Denoising Diffusion Models (DDM), which has gained significant attention due to its ability to generate high-quality images with fewer parameters compared to other models. DDM uses a denoising process to remove noise from the input image, resulting in a more accurate and detailed output.

Another notable diffusion model is Diffusion-based Generative Adversarial Networks (DGAN). This model combines the strengths of diffusion models and Generative Adversarial Networks (GANs). DGAN uses a diffusion process to generate new samples, which are then used to train a GAN. This approach allows for more diverse and coherent samples compared to traditional GANs.

Probabilistic Diffusion-based Generative Models (PDGM) is another type of generative model that combines the strengths of diffusion models and Gaussian processes. PDGM uses a probabilistic diffusion process to generate new samples, which are then used to estimate the underlying distribution of the data. This approach allows for more flexible modeling of complex distributions.

Non-local Diffusion Models (NLDM) incorporate non-local information into the generation process. NLDM uses a non-local similarity measure to capture long-range dependencies in the data, resulting in more realistic and detailed outputs.

Hierarchical Diffusion Models (HDM) incorporate hierarchical structures into the generation process. HDM uses a hierarchy of diffusion processes to generate new samples at multiple scales, resulting in more detailed and coherent outputs.

Diffusion-based Variational Autoencoders (DVAE) are a type of variational autoencoder that uses a diffusion process to model the latent space of the data. DVAE learns a probabilistic representation of the data, which can be used for tasks such as image generation, data imputation, and semi-supervised learning.

Two other notable diffusion models are Diffusion-based Text Generation (DTG) and Diffusion-based Image Synthesis (DIS).

DTG uses a diffusion process to generate new sentences or paragraphs, modeling the probability distribution over the words in a sentence and allowing for the generation of coherent and diverse texts.

DIS uses a diffusion process to generate new images, modeling the probability distribution over the pixels in an image and allowing for the generation of realistic and diverse images.

Diffusion models are a powerful tool in artificial intelligence that can be used for various applications such as image and text generation. To utilize these models effectively, you may follow this workflow:

Gather and preprocess your dataset to ensure it aligns with the problem you want to solve.

This step is crucial because the quality and relevance of your training data will directly impact the performance of your diffusion model.

Keep in mind when preparing your dataset:

Choose an appropriate diffusion model architecture based on your problem.

There are several types of diffusion models available, including VAEs (Variational Autoencoders), Denoising Diffusion Models, and Energy-Based Models. Each type has its strengths and weaknesses, so it's essential to choose the one that best fits your specific use case.

Here are some factors to consider when selecting a diffusion model architecture:

Train the diffusion model on your dataset by optimizing model parameters to capture the underlying data distribution.

Training a diffusion model involves iteratively updating the model parameters to minimize the difference between the generated samples and the real data.

Keep in mind that:

Once your model is trained, use it to generate new data samples that resemble your training data.

The generation process typically involves iteratively applying the diffusion process to a noise tensor.
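A hedged sketch of that generation loop, reusing the `model`, `betas`, and `a_bar` names from the training sketch earlier (an assumption for continuity, not code from the article): start from pure noise and iteratively denoise.

```python
import torch

@torch.no_grad()
def sample(model, betas, a_bar, n=16, steps=1000):
    x = torch.randn(n, 2)                  # the initial noise tensor
    for t in reversed(range(steps)):
        eps = model(torch.cat([x, torch.full((n, 1), t / steps)], dim=1))
        alpha_t = 1.0 - betas[t]
        # Standard DDPM update: subtract the predicted noise contribution...
        x = (x - betas[t] / (1 - a_bar[t]).sqrt() * eps) / alpha_t.sqrt()
        if t > 0:
            # ...then re-inject a small amount of fresh noise (sigma = sqrt(beta)).
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x
```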

Remember when generating new samples:

Depending on your application, you may need to fine-tune the generated samples to meet specific criteria or constraints.

Fine-tuning involves adjusting the generated samples to better fit your desired output or constraints. This can include cropping, rotating, or applying further transformations to the generated images.

Dont forget:

Evaluate the quality of generated samples using appropriate metrics. If necessary, fine-tune your model or training process.

Evaluating the quality of generated samples is crucial to ensure they meet your desired standards. Common evaluation metrics include peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and human perception scores.
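For instance, PSNR and SSIM can be computed with scikit-image; in this minimal sketch the arrays simply stand in for a reference image and a generated sample.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(64, 64)                     # stand-in reference image
generated = np.clip(reference + 0.05 * np.random.randn(64, 64), 0, 1)

psnr = peak_signal_noise_ratio(reference, generated, data_range=1.0)
ssim = structural_similarity(reference, generated, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")        # higher is better for both
```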

Here are some factors to consider when evaluating your generated samples:

Integrate your diffusion model into your application or pipeline for real-world use.

Once you've trained and evaluated your diffusion model, it's time to deploy it in your preferred environment.

When deploying your diffusion model:

Diffusion models hold the key to unlocking a wealth of possibilities in the realm of artificial intelligence. These powerful tools go beyond mere functionality and represent the fusion of science and art, as data metamorphoses into novel, varied, and coherent forms. By harnessing the natural process of diffusion, these models empower us to create previously unimaginable outputs, limited only by our imagination and creativity.

Featured image credit: svstudioart/Freepik.

Continue reading here:
An Introduction To Diffusion Models For Machine Learning: What ... - Dataconomy

Read More..

Industrial AI Challenges and the Path Forward – ARC Advisory Group

In the ever-evolving landscape of Artificial Intelligence (AI), generative AI has emerged as a key player, promising to revolutionize industries and business processes. However, it's not without its share of complexities and confusion. To shed light on this innovative technology, I committed to exploring the latest breakthroughs and their applicability to Industrial AI with a set of ARC blogs, podcasts, Insights and Reports. However, with the general confusion created by generative AI myths and misconceptions, I've worked with the team at ARC to lay some AI foundations, including a glossary of terms, in our initial report on The Industrial AI (R)Evolution.

Industrial AI, a subset of the broader field of artificial intelligence (AI), refers to the application of AI technologies (including Generative AI) in industrial settings to augment the workforce in pursuit of growth, profitability, more sustainable products and production processes, enhanced customer service, and business outcomes. Industrial AI leverages machine learning, deep learning, neural networks, and other approaches. Some of these techniques have been used for decades to build AI systems using data from various sources within an industrial environment, such as sensors, machinery, industrial engineers, and frontline workers.

Among these AI techniques and technologies, generative AI has caught the attention of many, particularly within industries such as Aerospace & Defense, Automotive, Electric Power & Smart Grid, Industrial Equipment, Oil & Gas, Semiconductors, and more.

Generative AI is powered by machine learning and neural networks that have been used for decades in various Industrial AI use cases. With genuine new breakthroughs in natural language processing (using GANs, transformers, and LLMs), it is revolutionizing how we interact with everything around us, whether those technologies are inherently smart, industrialized, or not.

However, this revolutionary technology often leads to confusion due to general media hype about AI, extravagant marketing claims from software suppliers struggling to get due credit for having invested in AI technologies long before the current wave of generative AI breakthroughs, its technical complexity and the rapid pace at which AI continues to evolve.

The current state of industrial AI presents a complex picture. On one hand, we have a myriad of AI solutions promising to revolutionize processes and boost efficiency. On the other hand, there is a lack of clarity regarding which technologies truly deliver on these promises.

One of the main challenges faced by organizations is discerning valuable AI breakthroughs from the hype. While many AI technologies have proven their worth, others are still emerging, and their long-term value remains uncertain.

Moreover, there are numerous myths and misconceptions surrounding which of these AI techniques and technologies are relevant to industrial AI use cases. These include the belief that AI implementation requires massive upfront investment or that it will lead to widespread job displacement. Such misconceptions often deter organizations from exploring the potential benefits of AI.

To address this confusion, the ARC Advisory Group has embarked on a mission to simplify the complex, identify relevant breakthroughs, and cut through the hype surrounding industrial AI. Central to this mission is The Industrial AI (R)Evolution report, which cuts through the generative AI hype, dispels myths, and summarizes the latest developments and trends in the field.

The report covers a wide range of topics, including data governance, cybersecurity risks, high-value industrial AI use cases, and the societal impact of AI. Additionally, it dives into the intricacies of various AI techniques, including unsupervised, semi-supervised, supervised, and reinforcement learning, as well as Large Language Models (LLMs) and Foundation Models (FMs).

The Industrial AI Report also dispels myths and misconceptions. One common myth is that AI implementation requires substantial upfront investment. While initial costs can be high, the long-term benefits often outweigh these costs. Another prevalent misconception is that AI will eliminate jobs. However, while AI may automate certain tasks, it also creates new roles and opportunities.

It discusses the shift in priorities from Industrial Metaverse to Industrial AI, emphasizing the potential of AI to drive efficiency and innovation in industries. For more on this particular topic read my blog on how Industrial AI is paving the way for Industrial Metaverse(s).

Industrial organizations can leverage ARC's Industrial AI Impact Assessment Model, used by ARC's own team of analysts, to guide their own AI evaluation and implementation process. This model offers a structured approach to assess the potential impact of AI on various aspects of the organization, including operations, strategy, and workforce.

As we continue to explore the potential of generative AI and other breakthroughs in industrial AI, collaboration and knowledge sharing become increasingly important. We invite you to join us in this journey, sharing your questions, experiences, learnings, and solutions.

The future of Industrial AI is promising, with its potential to transform industries and societal structures. By deepening our understanding and effectively applying AI technologies, we can unlock their true potential in the industrial realm.

For more information or to contribute to Industrial AI research, please contact Colin Masson at cmasson@arcweb.com.

Excerpt from:

Industrial AI Challenges and the Path Forward - ARC Advisory Group

Read More..

Machine-learning model predicts CKD progression with ‘readily … – Healio

September 14, 2023


A machine-learning model developed by researchers at Sonic Healthcare USA accurately predicted the progression of chronic kidney disease using readily available laboratory data.

"CKD is a major cause of morbidity and mortality," Joseph Aoki, MD, senior vice president, population health, at the Austin, Texas-based company, and colleagues wrote in a study. "While more research is needed, our results support clinical utility of the model to improve timely recognition and optimal management for patients at risk for CKD progression."

The investigators conducted a retrospective observational study to analyze deidentified laboratory information services data from a large U.S. outpatient laboratory network. It involved 110,264 adults with initial eGFR values between 15 mL/min/1.73 m2 and 89 mL/min/1.73 m2.

Researchers developed a seven-variable risk classifier model using random forest survival methods to predict eGFR decline of more than 30% within 5 years.

Results showed that the risk classifier model accurately predicted eGFR decline greater than 30% and achieved an area under the receiver operating characteristic curve (AUC) of 0.85.

"The most important predictor of progressive decline in kidney function was the eGFR slope," the authors wrote, followed by the urine albumin-creatinine ratio and serum albumin slope. Other key contributors to the model included initial eGFR, age and sex.
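As a purely illustrative stand-in (the study used random forest survival methods; this simplified sketch swaps in a plain random-forest classifier on synthetic data shaped like the predictors named above, with a hypothetical seventh variable):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.normal(-1.5, 1.0, n),    # eGFR slope (mL/min/1.73 m2 per year)
    rng.lognormal(3.0, 1.2, n),  # urine albumin-creatinine ratio (mg/g)
    rng.normal(0.0, 0.1, n),     # serum albumin slope (g/dL per year)
    rng.uniform(15, 89, n),      # initial eGFR
    rng.uniform(18, 90, n),      # age (years)
    rng.integers(0, 2, n),       # sex (0/1)
    rng.normal(4.0, 0.5, n),     # serum albumin level (hypothetical 7th variable)
])
# Synthetic label: >30% eGFR decline within 5 years (toy generating rule only).
risk = 1 / (1 + np.exp(-(-0.8 * X[:, 0] + 0.3 * np.log(X[:, 1]) - 3.0)))
y = rng.random(n) < risk

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```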

"Our progressive CKD classifier accurately predicts significant eGFR decline in patients with early, mid and advanced disease using readily obtainable laboratory data," they added.

The authors wrote that the study did have limitations: it did not evaluate the role of clinical variables, such as blood pressure, on the performance of the model. "Further prospective work is warranted to validate the findings and assess the clinical utility of the model," the researchers wrote.

"Used as a complement to and in conjunction with other well-established predictive models, the progressive CKD risk classifier has the potential to significantly improve timely recognition, risk stratification and optimal management for a heterogeneous population with CKD at a much earlier stage for intervention," Aoki and colleagues wrote.


See more here:
Machine-learning model predicts CKD progression with 'readily ... - Healio

Read More..

Here are the Top 3 AI Crypto Coins for 2023 SingularityNET … – Cryptonews

Image by Tara Winstead

Artificial Intelligence (AI) and cryptocurrencies are two technologies that encapsulate the spirit of our times. It was only a matter of time before these two domains intersected, giving rise to AI crypto.

These specialized tokens function as the operational fuel for AI platforms built on blockchain technology. By spending the tokens, users can access and benefit from the integrated AI features of the platforms.

In this article, we will explore three AI crypto projects that show promise in effectively merging these two dynamic fields: SingularityNET, Ocean Protocol, and yPredict.


SingularityNET offers a decentralized AI marketplace, using blockchain technology to give open access to various AI algorithms and tools. The platform aspires to develop artificial general intelligence (AGI), a form of AI that can perform multiple tasks rather than just specialized ones.

Founded by Dr. Ben Goertzel and Dr. David Hanson in 2017, the latter also runs Hanson Robotics, known for creating the humanoid robot Sophia. SingularityNET operates on both the Ethereum and Cardano blockchains, allowing developers to share their AI services for public or corporate use. The native currency of the platform is AGIX, which is used for internal transactions.

SingularityNET was among the early initiatives to combine AI and cryptocurrency technologies. By utilizing blockchain, it strives to make AI more accessible and foster a community where developers can collaborate and enhance their AI services.

The platform currently hosts over 70 AI services, developed by a global community of creators. These services range from real-time voice cloning to image generation, and they are designed to be user-friendly, so even those not well-versed in AI can easily use them.

Smart contracts play a pivotal role in SingularityNET, making transactions fair and straightforward. These self-executing contracts outline terms and conditions for users who want to access a particular AI service. This not only simplifies the transaction process but also allows developers to focus on what they do best: innovating and refining AI systems.

Being decentralized, SingularityNET ensures that no single party can exert undue control or restrict access to AI services. Prices for these services are automatically set and enforced by smart contracts.

AGIX serves multiple purposes within SingularityNET, such as:

Although initially designed for a single blockchain, AGIX has evolved to be compatible with multiple blockchain systems, including Cardano, Polygon, and Binance Smart Chain.

Ocean Protocol is a blockchain-based platform designed to facilitate the exchange between data providers and consumers. Established in 2017, the platform makes data a tradeable asset through a unique tokenization process.

On Ocean Protocol, data sets and services are converted into ERC-20 tokens, which are based on the Ethereum blockchain. This allows data providers to securely sell access to their data assets.

The tokenization model of Ocean Protocol offers a streamlined and secure way for individuals and businesses to monetize their data. As a result, the platform hosts an expansive data marketplace where data analysts, researchers, and scientists can easily acquire valuable data sets. This capability is especially relevant for sectors that rely heavily on data, such as artificial intelligence.

The platform's core team has a strong background in big data and AI, adding to the platform's credibility and focus. Ocean Protocol's native currency, OCEAN, is the primary medium for transactions on the platform and enables community governance. The token also offers staking rewards, making it multi-functional within the Ocean Protocol ecosystem.

The total supply of OCEAN tokens is capped at 1.41 billion, with roughly 613 million in circulation.

Though yPredict is still in the presale phase, it has already gained substantial attention. The platform, which operates on the Polygon Matic chain, has raised more than $3.83 million towards its $4.6 million goal.

yPredict's key offering will be a prediction marketplace. Here, financial data scientists can monetize their predictive models by offering them as subscription services. Traders can then subscribe to these services using YPRED tokens to get valuable market forecasts.

The YPRED tokens will also be used for other functionalities, like cryptocurrency analysis and access to data-driven insights. Token holders will have the option to stake their tokens in high-yield pools.

The total supply of YPRED tokens is capped at 100 million, with 80 million set aside for the presale. The remainder is allocated for liquidity and development. YPRED tokens will also enable holders to participate in governance activities within the yPredict platform.

yPredict is also planning to offer analytical tools, such as pattern recognition, sentiment analysis, and transaction analytics. The platform is even working on an AI-powered backlink estimator. Initially free, this feature has now been priced at $99 per query due to high demand.

In the current technological landscape where AI and cryptocurrency are increasingly becoming central, the fusion of these domains in the form of AI crypto is a development worth noting. These tokens serve as a bridge, allowing users to tap into AI capabilities via blockchain platforms. Each project offers a unique approach to integrating AI and blockchain, expanding the possibilities of what these technologies can achieve when combined.


Disclaimer: Crypto is a high-risk asset class. This article is provided for informational purposes and does not constitute investment advice. You could lose all of your capital.

Link:

Here are the Top 3 AI Crypto Coins for 2023 SingularityNET ... - Cryptonews

Read More..

Machine learning improves credit card fraud detection by over 94 … – Arab News

RIYADH: Machine learning algorithms could enhance credit card fraud detection by over 94 percent, according to a new study by the Arab Monetary Fund.

According to the report, artificial intelligence plays a crucial role in strengthening credit card fraud detection, and machine learning predicts fraudulent transactions to a large extent.

Global losses from credit card fraud incurred by financial institutions and individuals hit $32.3 billion in 2021, a substantial rise of 13.8 percent compared to the previous year.

AMF, in its report, also urged intensified innovation and collaboration with top financial technology firms to develop ML-based fraud detection systems.
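For illustration only (a generic pattern, not the AMF study's own model), an ML-based fraud detector typically trains a classifier on transaction features where fraud is rare, using class weighting to cope with the imbalance:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 20000
X = np.column_stack([
    rng.exponential(80, n),    # transaction amount
    rng.uniform(0, 24, n),     # hour of day
    rng.integers(0, 2, n),     # card-present flag
])
fraud_rate = 0.01 + 0.03 * (X[:, 0] > 400)   # toy generating rule
y = rng.random(n) < fraud_rate               # roughly 1-4% fraudulent

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
# class_weight="balanced" compensates for the heavy class imbalance.
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```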

It also highlighted the vitality of using AI and ML to analyze credit card fraud in Arab nations.

The situation is also getting more challenging because of the increasing credit card penetration in the region.

Saudi card payments

In May, London-based data and analytics firm GlobalData reported that Saudi Arabia's card payments market is expected to grow by 14.6 percent to reach SR532.1 billion ($141.9 billion) in 2023, driven by contactless payments and the government's push for a digitized society.

The study found that card payment value in the Kingdom registered annual growth of 29.8 percent in 2021 and 17.3 percent in 2022, thanks to improving economic conditions and a rise in consumer spending.

"While cash has traditionally been the preferred payment method in Saudi Arabia, its usage is on the decline in line with the rising consumer preference for electronic payments," said Ravi Sharma, lead banking and payments analyst at GlobalData, in a statement released in May.

Stringent regulations

The increasing utility has also spurred a rise in government regulations to prevent financial fraud across the region.

In July, Dubai Public Prosecution announced a clampdown on those forging, counterfeiting or reproducing debit and credit cards, and warned that offenders face imprisonment and fines ranging from 500,000 dirhams ($136,127) to 2 million dirhams.

"Forging or counterfeiting or reproducing a credit card or debit card or any other electronic payment method by using any information technology means or computer program shall expose to imprisonment and a fine not less than 500,000 dirhams and not over 2 million dirhams, or either of these two penalties," said Dubai Public Prosecution.

Earlier this year, Saudi Arabia also announced a $1.3 million fine and a five-year jail term for anyone who forges any electronic signature, record, or digital certificate, or uses these documents while knowing they are fake.

Link:
Machine learning improves credit card fraud detection by over 94 ... - Arab News

Read More..

Accubits, Bud Ecosystem open-source Large Language Model, drive it among global top – BusinessLine

Thiruvananthapuram-based Accubits Technologies has open-sourced GenZ 70B, a Large Language Model (LLM), which is now among the top listed on HuggingFace's leaderboard, a global platform that curates, evaluates, and compares AI models.

A 70-billion-parameter fine-tuned model, it is ranked number one on the HuggingFace leaderboard for instruction-tuned LLMs and sixth for open LLMs in all categories. It was open-sourced collaboratively with Bud Ecosystem, a separate Accubits company, says Aharsh MS, Chief Marketing Officer, Accubits Technologies. Bud focuses on fundamental research in artificial general intelligence (AGI) and behavioural science, and is building an ecosystem around multi-modal, multi-task foundational models.

An LLM (for instance, GPT-4 by OpenAI) is a type of machine learning model specifically designed for processing and generating human-like text based on vast amounts of textual data. GPT-4 is the largest model in OpenAI's GPT series, released this year. Its parameter count has not been released to the public, though it is speculated that the model has more than 1.7 trillion parameters.

"An LLM model from India ranking top on a global scale is significant, and can be an inspiration for the local developer community," says Aharsh MS. "GenZ is an auto-regressive language model with an optimised transformer architecture. We fine-tuned the model with curated datasets using the Supervised Fine-Tuning (SFT) method," Aharsh explained to businessline.

It used OpenAssistant's instruction fine-tuning dataset and Thought Source for the Chain of Thought (CoT) approach. With extensive fine-tuning, it has acquired additional skills and capabilities beyond what a pre-trained model can offer. Aharsh offered deeper insight into the world of natural language processing programs in an interview.
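For readers curious what supervised fine-tuning looks like in code, here is a heavily hedged sketch using the Hugging Face transformers Trainer on a tiny causal LM; the base model, data format, and hyperparameters are placeholders, and this is not Accubits' actual pipeline.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

# Placeholder base model; GenZ's real base and data are much larger.
tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# SFT data: instruction/response pairs serialized into a single text field.
pairs = [{"text": "### Instruction:\nSummarize diffusion models.\n"
                  "### Response:\nThey generate data by learning to denoise."}]

def tokenize(example):
    out = tok(example["text"], truncation=True,
              padding="max_length", max_length=128)
    out["labels"] = out["input_ids"].copy()   # causal LM: labels = inputs
    return out

ds = Dataset.from_list(pairs).map(tokenize, remove_columns=["text"])
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-out", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=ds,
)
trainer.train()
```

In practice, higher-level wrappers such as TRL's SFTTrainer are often used instead, but the loop above shows the core idea: the model is simply trained to continue curated instruction/response text.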

Excerpts:

Are Generative AI and LLMs the same thing?

No. LLMs fall under the umbrella of generative AI, but the reverse isn't true. Not every generative AI model is an LLM. The difference primarily hinges on the type of content a model is designed to produce and its specialised applications. Generative AI refers to a broader class of AI models designed to generate new content. This creation capability isn't restricted solely to text; it spans a diverse array of outputs, including images, music compositions, and even videos. On the other hand, LLMs represent a specific subset within the generative AI spectrum. These models are meticulously designed and optimised for tasks related to language. Trained on immense volumes of text data, LLMs excel in generating coherent and contextually apt textual outputs. This might range from crafting detailed paragraphs to answering intricate questions or even extending given textual prompts.

Why did youopen-source the model?

Accubits and Bud Ecosystem worked on the GenZ 70B suite of open-source LLMs to democratise access to Generative AI-based technologies. We believe that Generative AI is one of the most disruptive technologies, perhaps more significant than the invention of fire itself. Such a technology must be freely available for everyone to experiment and innovate.

With this objective in mind, we are open-sourcing models that can be hosted even on a laptop. GenZ's GPTQ- and GGML-based models can be hosted on a personal laptop without a GPU. Bud Ecosystem has its own proprietary multi-modal, multi-task models, which it uses to build its own products. Accubits is already helping its customers adopt Generative AI-based technologies at scale, helping them build RoI-driven products and solutions.

How do you look to stay ahead of fine-tuning models being extensively released now?

The training data used and our fundamental research on attention mechanisms, model alignment, consistency, and reliability have enabled us to build GenAI models with good performance. Most fine-tuned models do not offer commercial licenses, which means businesses do not have the freedom to use them for building commercial applications. GenZ 70B stands out mainly for two reasons: one, it offers a commercial license, and two, it offers good performance. Our model is primarily instruct-tuned for better reasoning, role play and writing capabilities, making it more suitable for business applications.

Are there any limitations to the model?

Like any Large Language Model, GenZ also carries risks. We recommend users consider fine-tuning it for the specific set of tasks of interest. Appropriate precautions and guardrails should be taken for any production use. Using GenZ 70B responsibly is key to unlocking its full potential while maintaining a safe and respectful environment.

Have you looked at how it can replicate the work of models such as ChatGPT?

GenZ 70B's performance is impressive, especially in relation to its size. The GenZ 70B model scored 7.34 on the MT-Bench benchmark, which is close to GPT-3.5's score of 7.94. Considering that GenZ 70B is 2.5 times smaller than ChatGPT 3.5 yet nearly matches its performance, I'd say it's surprisingly efficient for its size. The model size is critical when considering real-world commercial use cases. Smaller models are usually easier to work with, use less computing power, and can be much more budget-friendly. GenZ can offer performance on par with GPT-3.5 in a much smaller package, making it very suitable for content creation.

What is the rate of accuracy of LLMs with respect to low-resource or distant languages (other than English)?

LLMs thrive on quality and quantity of training data. Since English is a dominant language on the internet, many of these are trained extensively on English data. This results in high accuracy when dealing with English language tasks, from simple text generation to more complex problem-solving.

In contrast, the accuracy level might be different with less common or less-studied languages, primarily because of the relative scarcity of quality training data. It's worth noting that the inherent capabilities of LLMs are not restricted to English or any specific language. If provided with extensive and diverse training data, an LLM can achieve better accuracy for a less common language. In essence, the performance in any given language is reflective of the amount and quality of the training data.


See the article here:

Accubits, Bud Ecosystem open-source Large Language Model, drive it among global top - BusinessLine

Read More..

What will the rise of AI really mean for schools? – TES

Advances in artificial intelligence (AI) are accelerating at breakneck speed. Systems like ChatGPT are approaching, and by some measures exceeding, human-level performance in many domains.

But what does the growth of these systems mean for schools?

We see tremendous potential for these technologies to enhance and augment human capabilities, making us smarter, more efficient and able to solve problems that currently seem impossible to manage.

However, we also see significant downsides. Without thoughtful intervention, AI could diminish human agency, stifle creativity and potentially stunt our collective progress.

Nowhere are the stakes higher than in education. Schools and universities have helped generations climb the ladder of knowledge and skills. But if machines can soon out-think us, what's the point of learning? Why invest time and effort acquiring expertise that could soon be obsolete?

To explore these questions, we recently co-authored a paper analysing the staggering pace of progress in AI and the potential implications for education.

Systems like GPT-4 are already scoring higher than well over 90 per cent of humans on academic tests of literacy and quantitative skills. Many experts predict AI will reach human-level reasoning across all domains in the next decade or two.

Once achieved, these artificial general intelligence systems could quickly exceed the combined brainpower of every person who has ever lived.

Faced with these exponential advances, how might society respond? We foresee four possible scenarios, all of which would have different implications for schools:

One option is that governments recognise the risks and halt further AI development, through regulation or restricting hardware supply. This might slow things down and buy some time.

Bans are hard to enforce, often porous, and would mean forfeiting many of the potential benefits that carefully governed AI systems could bring. However, if AI advances get curtailed at, say, GPT-4.5, there is a greater chance that humanity stays in the driving seat and we still benefit from education.

In fact, with suitable guardrails, many of the recent AI advances might greatly accelerate our thinking skills, for example by providing high-quality supplementary AI tuition to all students and by acting as a digital personal assistant to teachers.

A second pathway is that AI takes over most jobs, but legislation forces companies to keep employing humans alongside the machines, in largely ceremonial roles. The risk here is that such fake work infantilises people.

As AI thinking accelerates, our stunted contributions could create bottlenecks, leaving us disempowered spectators rather than active participants.

This pathway also requires only a basic level of education - we would simply need to turn up and read out the script displayed in our AI glasses. After all, our own thinking and words would never exceed the abilities of the machines.

Wanting to remain competitive, some might opt to biologically or digitally upgrade their brains through gene editing or neural implants. This might sound like science fiction, but is not beyond the realm of possibility - and such a scenario would have profound implications for education.

We might be able to literally download new knowledge, skills and abilities in milliseconds. No more need for schooling.

But in making ourselves more machine-like, would we risk losing our humanity?

A final scenario is that we accept economic irrelevance and embrace universal basic income - paid for by taxing the fruits of AI labour. Freed from work, people would focus on sports, hobbies, rituals and human connections.

But devoid of productive purpose, might we lose our vital force and struggle to get out of bed in the morning?

All these paths are, in different ways, problematic. So, before we sleepwalk into one, we need urgent debate on the destination we want.

Our paper offers 13 pragmatic proposals to regulate and slow down AI, to buy time for this discussion by, for example: requiring frontier AI models to be government licensed before their release; making it illegal for systems to impersonate humans; implementing guardrails to stop AI systems from giving students the answers; and making system developers accountable for untruths, harms and bad advice generated by their systems.

At the same time, we must also re-examine educations role in society. If humans can add only marginal value working alongside AI, schools may need to pivot from preparation for employment to nurturing distinctly human traits: ethics, empathy, creativity, playfulness and curiosity.

As AI excels at information retrieval and analysis, we must double down on contextual reasoning, wisdom, judgement and morality. However, even here, we must be realistic that (eventually) AI is likely to be able to emulate all these human traits as well.

Some skills like literacy might also become less essential - for example, if we can learn through verbal discourse with AI or by porting into realistic simulations.

Yet foundational knowledge will likely remain crucial, enabling us to meaningfully prompt and critique AI. And direct instruction, whether by teacher or AI, will still help students to grasp concepts more quickly than trial-and-error discovery. We must, therefore, identify the irreducible core of timeless human competencies to pass on.

None of this is preordained. With vigilance, foresight and governance, AI can uplift humanity in the same way that prior innovations have. But we must act decisively. Timelines are highly uncertain. AI capabilities could exceed our own in a decade or two. Either way, the hinge point of history is now.

We hope these proposals stimulate urgent debate on the society and education system we want to build - before the choice is made for us.

Dylan Wiliam is emeritus professor of educational assessment at the UCL Institute of Education. John Hattie is emeritus laureate professor of education at the University of Melbourne. Arran Hamilton is group director, education, at Cognition Learning Group. His contributions had editorial support from Claude AI

Go here to read the rest:

What will the rise of AI really mean for schools? - TES

Read More..