
Combatting COVID-19 misinformation with machine learning (VB Live) – VentureBeat

Presented by AWS Machine Learning

As machine learning has evolved, so have best practices, especially in the wake of COVID-19. Join this VB Live event to learn from experts about how machine learning solutions are helping companies respond in these uncertain times and the lessons learned along the way.

Register here for free.

Misinformation around COVID-19 is driving human behavior across the world. Here in the information age, sensationalized clickbait headlines are crowding out actual fact-based content, and, as a result, misinformation spreads virally. Conversations within small communities become the epicenter of false information, and that misinformation spreads as people talk, both online and off. As the number of misinformed people grows, the infodemic grows with it.

The spread of misinformation around COVID-19 is especially problematic, because it could overshadow the key messaging around safety measures from public health and government officials.

In an effort to counter misinformed narratives in central and west Africa, Novetta Mission Analytics (NMA) is working with Africa CDC (Centres for Disease Control and Prevention) to discover and identify narratives and behavior patterns around the disease, says David Cyprian, product owner at Novetta. And machine learning is key.

NMA supplies data that measures the acceptability, impact, and effectiveness of public health and social measures. In turn, Africa CDC's analysis of that data enables it to generate tailored guidelines for each country.

"With all these different narratives out there, we can use machine learning to quantify which ones are really affecting the largest population," Cyprian explains. "We uncover how quickly these things are spreading, how many people are talking about the issues, and whether anyone is actually criticizing the misinformation itself."

NMA uncovered trending phrases that indicate worry around the disease, mistrust about official messaging, and criticisms of local measures to combat the disease. They found that herbal remedies are becoming popular, as is the idea of herd immunity.

"We know all of these different narratives are changing behavior," Cyprian says. "They're causing people to make decisions that make it more difficult for the COVID-19 response community to be effective and implement countermeasures that are going to mitigate the effects of the virus."

To identify these narrative threads, Novetta ingests publicly available social media at scale and pairs it with a collection of domestic and international news media. They process and analyze that raw social and traditional media content in their ML platform built on AWS to identify where people are talking about these things, and where events are happening that drive the conversations. They also use natural language processing for directed sentiment analysis, to discover whether narratives are being driven by mistrust of a local government entity, the West, or international organizations, and to identify influencers who are engendering a lot of positive sentiment among users and building trust.

Pieces of content are tagged as positive or negative toward local and global pandemic measures and public entities, creating small human-labeled data sets about specific micronarratives for specific populations that might be trading in misinformation.

By fusing rapid ingestion with a human labeling process of just a few hundred artifacts, they're able to kick off machine learning and apply it at the scale of social media. This allows them to maintain more than one learning model, rather than a single model used for all the problem sets.
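To make this concrete, here is a minimal sketch of how a few hundred human-labeled posts can bootstrap a text classifier that then scales to a full social media stream. This illustrates the general technique only, not Novetta's actual pipeline; the example posts, labels, and model choice are all assumptions.

# Illustrative sketch only; not Novetta's actual pipeline.
# Assumes a few hundred posts hand-labeled as pushing ("misinfo")
# or criticizing ("counter") a specific micronarrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled_posts = [
    ("this herbal remedy cures the virus, doctors hide it", "misinfo"),
    ("there is no evidence herbal teas prevent infection", "counter"),
    # ... a few hundred human-labeled artifacts in practice
]
texts, labels = zip(*labeled_posts)

# TF-IDF plus logistic regression trains acceptably on small label sets
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# The trained model can then score the full stream of ingested posts
print(clf.predict(["my aunt says the herbs cured her whole village"]))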

"We don't have a one-size-fits-all approach," says Cyprian. "We're always tuning and researching accuracy for specific narratives, and then we're able to provide large, near-real-time insights into how these narratives are propagating or spreading in the field."

Built on AWS, their machine learning architecture allows their development team to focus on what they do well: developing new applications and new widgets to analyze this data.

They don't need to worry about any server management or scaling, since that's taken care of for them with Amazon EC2 and S3. Their microservices architecture uses additional Amazon services, particularly Elastic Kubernetes Service (EKS) to orchestrate their services, and Amazon Elastic Container Registry (ECR) to store images and run vulnerability testing before they deploy.

Novetta's approach is cross-disciplinary, bringing in domain experts from the health field, media analysts, machine learning research engineers, and software developers. They work in small teams to solve problems together.

"In my experience, that's been the best way for machine learning to make a practical difference," he says. "I would just urge folks who are facing similar difficult problems to enable their people to do what people do well, and then have the machine learning engineers help to harden, verify, and scale those efforts so you can bring countermeasures to bear quickly."

To learn more about the impact machine learning solutions can deliver and lessons learned along the way, don't miss this round table with leaders from Kabbage and Novetta, as well as Michelle K. Lee, VP of the Amazon Machine Learning Solutions Lab.

Don't miss out!

Register here for free.

You'll learn:

Speakers:


Domino Data Lab Named a Leader in Notebook-Based Predictive Analytics and Machine Learning Evaluation by Global Research Firm – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Domino Data Lab, provider of the leading open enterprise data science management platform trusted by over 20% of the Fortune 100, has been named a Leader by Forrester Research in its report "The Forrester Wave: Notebook-based Predictive Analytics and Machine Learning (PAML), Q3 2020." As previously announced, Domino was also a Leader in the Q3 2018 Notebook-based Predictive Analytics and Machine Learning (PAML) Forrester Wave.

According to the Forrester report, the Domino Data Science Platform "...supports the diversity of [machine learning] ML options that users need in today's rapidly expanding PAML ecosystem, with repeatability, discipline and governance."

The report also notes that "...Domino tames the chaos, bringing all your different PAML tools together and binding them in a common, governed platform," adding that Domino "drives productivity by abstracting away infrastructure provisioning, managing clusters, tracking experiments, maintaining version control, and deploying and monitoring models," and "drives collaboration with built-in knowledge management tools and shared repositories for data, code, model artifacts, and apps irrespective of where they were developed."

The Domino data science platform was built to satisfy the needs of large enterprises with teams of code-first data scientists who demand collaboration, openness, and reproducibility, backed by IT governance, security, and compliance, for centralized data science at scale.

"We're proud to be the platform that powers data science for leading enterprises. This gives us a front-row seat to see how companies like Bayer, Bristol-Myers Squibb, and Dell are using data science to solve the world's most complex problems, like fighting cancer or creating a vaccine for COVID," said Nick Elprin, CEO and co-founder at Domino Data Lab. "Domino was built to drive the critical capabilities the world's most advanced organizations need, and we're delighted that Forrester has recognized us as a Leader."

Domino Data Lab was evaluated in The Forrester Wave on 26 criteria across three categories (current offering, strategy, and market presence), alongside 11 other vendors. In the evaluation, Domino received among the top scores in the ModelOps criterion, and received the highest scores possible in the criteria of collaboration, platform infrastructure, ability to execute, solution roadmap, and enablement.

A complimentary copy of this research report is available at dominodatalab.com.

About Domino Data Lab

Domino Data Lab empowers data science teams with the leading, open data science platform that enables enterprises to manage and scale data science with discipline and maturity. Model-driven companies including Allstate, Dell Technologies, and Bayer use Domino as a data science system of record to accelerate breakthrough research, increase collaboration, and rapidly deliver high-impact models. Founded in 2013 and based in San Francisco, Domino is backed by Sequoia Capital, Coatue, Bloomberg Beta, Dell Technologies Capital, Highland Capital Partners, and Zetta Venture Partners. For more information, visit dominodatalab.com.


This artist used machine learning to create realistic portraits of Roman emperors – The World

Some people have spent their quarantine downtime baking sourdough bread. Others experiment with tie-dye. But others, namely Toronto-based artist Daniel Voshart, have created painstaking portraits of all 54 Roman emperors of the Principate period, which spanned from 27 BC to 285 AD.

The portraits help people visualize what the Roman emperors would have looked like when they were alive.

Included are Voshart's best artistic guesses at the faces of emperors Augustus, Nero, Caligula, Marcus Aurelius, and Claudius, among others. They don't look particularly heroic or epic; rather, they look like regular people, with craggy foreheads, receding hairlines, and bags under their eyes.

To make the portraits, Voshart used design software called Artbreeder, which relies on a kind of artificial intelligence called generative adversarial networks (GANs).

Voshart starts by feeding the GANs hundreds of images of the emperors collected from ancient sculpted busts, coins, and statues. Then he gets a composite image, which he tweaks in Photoshop. To choose characteristics such as hair color and eye color, Voshart researches the emperors' backgrounds and lineages.

"It was a bit of a challenge," he says. "About a quarter of the project was doing research, trying to figure out if there's something written about their appearance."

He also needed to find good images to feed the GANs.

"Another quarter of the research was finding the bust, finding when it was carved, because a lot of these busts are recarvings or carved hundreds of years later," he says.

In a statement posted on Medium, Voshart writes: "My goal was not to romanticize emperors or make them seem heroic. In choosing busts/sculptures, my approach was to favor the bust that was made when the emperor was alive. Otherwise, I favored the bust made with the greatest craftsmanship and where the emperor was stereotypically uglier, my pet theory being that artists were likely trying to flatter their subjects."

Related: Battle of the bums: Museums compete over best artistic behinds

Voshart is not a Rome expert. His background is in architecture and design, and by day he works in the art department of the TV show "Star Trek: Discovery," where he designs virtual reality walkthroughs of the sets before they're built.

But when the coronavirus pandemic hit, Voshart was furloughed. He used the extra time on his hands to learn how to use the Artbreeder software. The idea for the Roman emperor project came from a Reddit thread where people were posting realistic-looking images they'd created on Artbreeder using photos of Roman busts. Voshart gave it a try and went into exacting detail with his research and design process, doing multiple iterations of the images.

Voshart says he made some mistakes along the way. For example, Voshart initially based his portrait of Caligula, a notoriously sadistic emperor, on a beautifully preserved bust in the Metropolitan Museum of Art. But the bust was too perfect-looking, Voshart says.

"Multiple people told me he was disfigured, and another bust was more accurate," he says.

So, for the second iteration of the portrait, Voshart favored a different bust where one eye was lower than the other.

"People have been telling me my first depiction of Caligula was hot," he says. "Now, no one's telling me that."

Voshart says people who see his portraits on Twitter and Reddit often approach them like they'd approach Tinder profiles.

"I get maybe a few too many comments, like such-and-such is hot. But a lot of these emperors are such awful people!" Voshart says.

Voshart keeps a list on his computer of all the funny comparisons people have made to present-day celebrities and public figures.

"I've heard Nero looks like a football player. Augustus looks like Daniel Craig ... my early depiction of Marcus Aurelius looks like the Dude from 'The Big Lebowski.'"

But the No. 1 comment? "Augustus looks like Putin."

Related: UNESCO says scammers are using its logo to defraud art collectors

No one knows for sure whether Augustus actually looked like Vladimir Putin in real life. Voshart says his portraits are speculative.

"It's definitely an artistic interpretation," he says. "I'm sure if you time-traveled, you'd be very angry at me."


Demonstration Of What-If Tool For Machine Learning Model Investigation – Analytics India Magazine

The machine learning era has reached the stage of interpretability, where developing models and making predictions is simply not enough anymore. To make a powerful impact and get good results on the data, it is important to investigate and probe the dataset and the models. A good model investigation involves digging deep into the understanding of the model to find insights and inconsistencies in the developed model. This task usually involves writing a lot of custom functions. But with tools like What-If, the probing task becomes very easy, saving programmers time and effort.

In this article, we will learn about the What-If Tool and walk through its features on a sample dataset.

The What-If Tool (WIT) is a visualization tool designed to interactively probe machine learning models. WIT allows users to understand machine learning models such as classifiers, regressors, and deep neural networks by providing methods to evaluate, analyse, and compare them. It is user-friendly and can be used easily not only by developers but also by researchers and non-programmers.

WIT was developed by Google under the People + AI Research (PAIR) program. It is open source, and the program brings together researchers across Google to study and redesign the ways people interact with AI systems.

This tool provides multiple features and advantages for users to investigate the model.

Some of its key features are demonstrated in the walkthrough below.

WIT can be used with a Google Colab notebook or Jupyter notebook. It can also be used with TensorBoard.

Let us take a sample dataset to understand the different features of WIT. I will choose the forest fire dataset available for download on Kaggle. You can click here to download the dataset. The goal here is to predict the area affected by forest fires given the temperature, month, amount of rain, etc.

I will implement this tool on Google Colaboratory. Before we load the dataset and perform the processing, we will first install WIT. To install this tool, use:

!pip install witwidget

Once we have split the data, we can convert the columns month and day to categorical values using a label encoder.
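The accompanying code from the original article is not preserved on this page; a minimal sketch of the loading, encoding, and splitting steps might look like the following, with file and column names taken from the Kaggle forest fires dataset (the encoding is done before the split here for brevity):

import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split

df = pd.read_csv("forestfires.csv")  # assumed file name from the Kaggle download

# Encode the categorical month and day columns as integers
for col in ["month", "day"]:
    df[col] = LabelEncoder().fit_transform(df[col])

X = df.drop(columns=["area"])  # "area" (hectares burned) is the target
y = df["area"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)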

Now we can build our model. I will use an sklearn ensemble model and implement gradient boosting regression.
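A sketch of that step; these hyperparameters are ordinary defaults, not necessarily the settings used in the original demo:

from sklearn.ensemble import GradientBoostingRegressor

# Gradient boosting regressor from sklearn's ensemble module
model = GradientBoostingRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)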

Now that we have the model trained, we will write a function to predict the data, since we need to use this for the widget.
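WIT calls a user-supplied function with a list of examples and expects one prediction per example, so a minimal version for this regressor could be (following the pattern from WIT's published sklearn examples):

import numpy as np

def custom_predict(examples):
    # WIT hands back each example as a list of feature values; the final
    # column is the ground-truth "area" value appended when the widget
    # examples are built below, so drop it before predicting
    return model.predict(np.array(examples)[:, :-1])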

Next, we will write the code to call the widget.
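A sketch of the invocation, following the usual WitConfigBuilder pattern for a custom sklearn regressor; the widget height and exact configuration here are assumptions:

from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# WIT shows ground truth alongside predictions, so append the target column
test_examples = np.hstack(
    (X_test.to_numpy(), y_test.to_numpy().reshape(-1, 1))
).tolist()

config_builder = (
    WitConfigBuilder(test_examples, list(X_test.columns) + ["area"])
    .set_custom_predict_fn(custom_predict)
    .set_target_feature("area")
    .set_model_type("regression")
)
WitWidget(config_builder, height=800)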

This opens an interactive widget with two panels.

To the left, there is a panel for selecting multiple techniques to perform on the data, and to the right are the data points.

As you can see, on the right panel we have options to select which features in the dataset to plot along the X-axis and Y-axis. I will set these values and check the graphs.

Here I have set FFMC along the X-axis and area as the target. Keep in mind that these points are displayed after the regression is performed.

Let us now explore each of the options provided to us.

You can select a random data point and highlight the point selected. You can also change the value of the datapoint and observe how the predictions change dynamically and immediately.

As you can see, changing the values changes the predicted outcomes. You can change multiple values and experiment with the model behaviour.

Another way to understand the behaviour of a model is to use counterfactuals. Counterfactuals are slight changes to an input that can cause a model to flip its decision.

By clicking on the slide button, we can identify the counterfactual, which gets highlighted in green.

This plot shows the effects that the features have on the trained machine learning model.

We can see the inference of all the features with the target value.

This tab allows us to look at the overall model performance. You can evaluate the model's performance with respect to one feature or more than one feature. There are multiple options available for analysing the performance.

I have selected two features, FFMC and temp, against the area to understand performance using mean error.

If multiple trained models are used, their performance can be evaluated here.

The features tab is used to get the statistics of each feature in the dataset. It displays the data in the form of histograms or quantile charts.

The tab also enables us to look into the distribution of values for each feature in the dataset.

It also highlights the features that are most non-uniform in comparison to the other features in the dataset.

Identifying non-uniformity is a good way to reduce bias in the model.

WIT is a very useful tool for analysing model performance. The ability to inspect models in a simple, no-code environment is a great help, especially from a business perspective.

It also gives insight into factors beyond training the model, like understanding why and how that model was created and how the dataset fits into the model.



RXA to Participate in 2nd Annual A2.AI Conference focused on Machine Learning & Applied AI – PR Web

Panelists from last year's RXA event speak at the A2.AI session about how machine learning and applied AI enable businesses to make informed decisions with data. Tune in to this year's virtual event.

ANN ARBOR, Mich. (PRWEB) September 10, 2020

RXA, the international leader in applied artificial intelligence, advanced data science, and analytics that allow companies to make smarter, faster decisions, announced today that it will participate in the second annual A2.AI conference, this time virtually.

The virtual conference is the first of its kind in the Ann Arbor area, focusing on how machine learning and applied artificial intelligence enable businesses to make more informed and actionable decisions with their data. The conference will host leaders and entrepreneurs from companies including RXA, Weave Workforce and ESPN and will ignite new conversations in the artificial intelligence space.

"Many businesses have not begun to utilize the power of applied AI, viewing it as something to tackle in the future. Through a series of speaker, panel, and networking sessions, RXA and other industry leaders will share how applied AI, predictive analytics, and data visualization are changing how companies can tackle complex challenges during these unpredictable times," said Jason Harper, chief executive officer and founder, RXA. "The exponentially rising influx of customer data has impacted every industry around the globe in various ways, and we have the ability, using advanced technologies, to better plan for the future of business if we can learn from and share best practices with other industry executives."

The conference will be held on September 23, 2020, from 12:00 p.m. to 4:30 p.m. EDT. Roundtable speakers at the virtual event will include: Mike McFall, CEO and co-founder, BIGGBY COFFEE; LTG Reynold Hoover, retired LTG and principal, RNHoover Consulting LLC; Doug Kramon, senior director of Fan Support & Customer Care Operations, ESPN; Charles Cantu, founder and CEO, RESET DIGITAL; Amy Klinke, senior director, Business Engagement Center, University of Michigan; Kristie Rowley, principal data scientist, manager of Data Science Professional Services, Domo; and Jason Harper, CEO and co-founder, RXA.

RXA featured speakers will include: John Larson, co-founder and CEO, Weave Workforce (https://www.weaveworkforce.com/); Jonathan Prantner, co-founder and chief analytics officer, RXA; Eric Green, chief executive officer, Ready Signal (https://www.readysignal.com/); Heather Reed, chief experience officer, RXA; and Tom Stanek, president, RXA (https://RXA.IO).

For more information about the A2.AI event, please visit https://a2.ai

If you are interested in attending, please register here: https://a2.ai/homepage/register/

RXA (https://RXA.IO) is a leading applied artificial intelligence and data science company founded in 2016 in Ann Arbor, MI. RXA has a diverse portfolio of services and solutions, including being a leading Domo implementation and consulting firm, customized artificial intelligence kick-start programs, and an RXA Studio to support the development of new products and companies, as well as proprietary solutions such as Mixed Media Optimization, Voice of Customer, and Workforce Optimization that help organizations improve their ROI and decision making while streamlining operations.

RXA's solutions are currently being leveraged by over 70 different customers across North America, Asia, and Europe. RXA has been named the 2019 Innovative Partner of the Year, Voice of Customer Experience | Application of the Year for 2020, and 2020 Rising Star by Domo, Inc.



50 Data Science and Analysts Jobs That Opened Just Last Week – Analytics India Magazine

Despite the pandemic, organisations and industries are looking to hire data science and analytics professionals who are familiar with advanced statistical techniques and machine learning tools, among others. With the evolving data science and analytics market, data scientists and AI practitioners should keep themselves abreast of the latest tools and trends in the field.

In this article, we list down 50 latest job openings in data science and analytics that opened just last week.

(The jobs are sorted according to the years of experience required).

Location: Bangalore

Skills Required: Machine learning, Python, data analysis, SQL, analytics, predictive modelling, natural language processing, deep learning, data science, etc.

Apply here.

Location: Bangalore

Skills Required: Deep learning frameworks such as TensorFlow and Keras, Python and basic libraries for machine learning such as scikit-learn and pandas, visualising and manipulating big data sets, OpenCV, Linux, etc.

Apply here.

Location: Bangalore

Skills Required: Business Intelligence and analysis, designing, testing, migration, production support and implement automation, visualisation tools like Power BI/Tableau, statistical packages for analysing datasets, Big data handling tools such as Python/R programming knowledge, etc.

Apply here.

Location: Telangana

Skills Required: Machine learning, text analytics, and statistical analysis methods, data mining, data wrangling, and data munging, R, Python, SAS, SPSS, Weka, etc.

Apply here.

Location: Mumbai

Skills Required: Customer segmentation, user profiling, churn analysis, Tableau, Looker, Qlik, SQL and MS Excel, R or Python, etc.

Apply here.

Location: Bangalore

Skills Required: Analytical skills, problem-solving skills, statistical computer languages (R, Python, SAS), statistical concepts, advanced machine learning techniques, optimisation techniques, distributed data/computing tools, etc.

Apply here.

Location: Hyderabad

Skills Required: Building and implementing predictive models using machine learning algorithms, Python, R, Hadoop and related Big Data technologies, etc.

Apply here.

Location: Remote

Skills Required: Python, R, AI/ML, data science techniques, programming, statistics, large datasets, etc.

Apply here.

Location: Hyderabad

Skills Required: Strong analytical background, for instance writing scripts, web scraping, calling APIs, writing SQL queries, Python scripting, business stakeholders' data requirements, designing and implementing automation policies, etc.

Apply here.

Location: Bangalore

Skills Required: SQL and relational databases, workflows using Alteryx, Python, Hadoop and cloud computing like AWS, data acquisition and manipulation, modelling, and analysing data from core platforms (Eloqua, Salesforce, Enterprise Data Warehouse), etc.

Apply here.

Location: Bangalore

Skills Required: Statistics, Machine Learning, programming skills in various languages (Python, Scala, R), machine learning frameworks such as Scikit-Learn, H2O, Keras, TensorFlow, and Spark, Linux/OS X command line, version control software (git), and general software development, database, programming or scripting to enable ETL development, etc.

Apply here.

Location: Bangalore

Skills Required: C++, object-oriented programming and file-based design, video surveillance domain and algorithms evaluation, Linux platform-based development, Python/Go programming, Caffe, TensorFlow, JS scripting, etc.

Apply here.

Location: Bangalore

Skills Required: Python, data manipulation, wrangling, cleansing, and analysis, large-scale datasets, Anaconda, Rstudio, Apache HTTP server, ETL pipelines, MS SQL Server 2008 and above, SQL queries, etc.

Apply here.

Location: Mumbai

Skills Required: HDFS, file, database, JSON, HTML, data warehousing & databases, Hadoop, Hive, Spark, SQL, Oracle, RDBMS, R, Python, etc.

Apply here.

Location: Mumbai

Skills Required: Data mining and machine learning techniques, complex statistical models, SQL, Linux, Python, and R, Big Data technologies like Hadoop, Hive, and/or MapReduce, Spark, Amazon Web Services, Google Cloud Platform, etc.

Apply here.

Location: Bangalore

Skills Required: Qiskit stack, full-stack quantum development, quantum computing, Python, delivery and test-driven development environments, etc.

Apply here.

Location: Bangalore

Skills Required: ML models, ETL process, data structure/algorithm, design patterns, and testing principles, fraud detection & credit risk assessment, etc.

Apply here.

Location: Kolkata

Skills Required: Advanced knowledge of applications programming, system flow and developing standards for coding, testing, debugging, and implementation, micro/macro designing, familiarity with Unix commands, and basic work experience in Unix shell scripting, etc.

Apply here.

Location: Bangalore

Skills Required: Design, develop and enhance the Marcus Data Platform, data warehousing concepts, especially in the ETL space, advanced SQL knowledge and experience working with relational databases, query authoring (SQL), etc.

Apply here.

Location: Bangalore

Skills Required: Deep learning (e.g., CNN, RNN, LSTM), SQL, R/Python knowledge such as Python, data science libraries, version control, Tensorflow, Keras, Caffe, PyTorch, etc.

Apply here.

Location: Gurgaon

Skills Required: Python/R scripting, common ML/DL algorithms, AWS implementations, Spark, BI Tool, preferably DSS and Tableau, etc.

Apply here.

Location: Hyderabad

Skills Required: Researching, gathering and analysing data, statistical methods and procedures used to obtain data to ensure validity, applicability, efficiency, and accuracy, etc.

Apply here.

Location: Mumbai

Skills Required: Extract and visualise data using SQL queries, building predictive models on Python, etc.

Apply here.

Location: Mumbai

Skills Required: Satellite analytics, document digitisation, image-based recommender system, frameworks like Pytorch, Scikit-learn, Python programming, distributed computing, Azure / AWS / GCP, deep learning solutions using FastAPI, Docker, etc.

Apply here.

Location: Pune

Skills Required: ML libraries and applications, mining and analysing data, Deep Learning, Pyspark, implementing the A/B tests, etc.

Apply here.

Location: Bangalore

Skills Required: Python, SQL, cloud computing, ETL tools, Spark, Airflow, Kafka, Big query, etc.

Apply here.

Location: Bangalore

Skills Required: Data analysis, data mining, modelling, statistical analysis, etc.

Apply here.

Location: Delhi

Skills Required: SQL, Advance Excel, VBA, Python, R, SSRS, Tableau, Domo, Power BI, etc.

Apply here.

Location: Anywhere in India

Skills Required: Data analytics, SQL, analytical Skills & problem-solving skills, stakeholder management skills, etc.

Apply here.

Location: Bangalore

Skills Required: Analytical Skills and Process Orientation, R/Python, SQL, Excel, visualisation tools like R-Shiny, Tableau, etc.

Apply here.

Location: Bangalore

Skills Required: Databases, analytical reasoning, statistical techniques, MySQL, Postgres, Oracle databases and fluent in SQL scripts, XML, Javascript, or ETL frameworks, Excel, SPSS, SAS, etc.

Apply here.

Location: Hyderabad

Skills Required: Machine learning / statistical analytics, A/B testing, experiment design, causal inference, or quasi-experimental methods, etc.

Apply here.

Location: Bangalore


FSS Launches Next Gen Recon with Machine Learning and Cloud Support – TechGenyz

FSS (Financial Software and Systems), a global payment product provider and processor, has introduced critical increments to its market-leading Smart Recon platform.

The new system leverages ML to automate manually intensive processes and improve the speed, accuracy, and reliability of the payment reconciliation process.

FSS Smart Recon provides an end-to-end, automated solution for reconciliation management across payment workflows, with built-in support for complex, multi-source, multi-file many-to-many reconciliation scenarios.

The notable enhancements are described below.

Collectively the enhancements provide a 40% improvement in time to market for greenfield implementations, a sizable 30% improvement in reconciliation time cycles, and an overall 25% reduction in direct costs as compared to partially automated processes.

Speaking on the developments, Krishnan Srinivasan, Global Chief Revenue Officer, FSS, stated: "FSS is a leader in the payment reconciliation space, with 25+ deployments globally. Our Smart Recon solution has been deployed by Tier One banks, neo banks, MNOs, and merchant aggregators.

"Across segments and markets, we are seeing significant demand for modernisation of back-office operations. Customers are increasingly pivoting away from in-house payment reconciliation systems with semi-automated processes towards service-based contracts backed by new-age technology platforms."

Speaking on the new increments, Sathish N, Dy CPO, FSS, stated: "Payment reconciliations have become exceedingly complex, and simplicity and speed are crucial for banks and financial institutions in the wake of an ever-increasing influx of transaction data.

"With our continuous innovation-led model and investment in the right technology, like machine learning, AI, and cloud computing, banks and financial institutions will benefit from flexible and scalable processes and improved speed and accuracy of reconciliation."

FSS Smart Recon deploys advanced unsupervised machine learning (ML) algorithms for settlement, accelerating the reconciliation cycle by dynamically identifying potential discrepancies at the source.

Algorithm-based settlement processes allow new components to be detected within seconds, lowering process latency and saving 80% of the time spent performing resolution actions.
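The release does not specify the algorithms involved, but as a generic illustration of unsupervised discrepancy flagging (not FSS's actual method, and with invented fields), an isolation forest can surface transactions whose ledger-versus-switch records look anomalous:

import pandas as pd
from sklearn.ensemble import IsolationForest

# Invented example records: each row pairs a ledger entry with its switch entry
txns = pd.DataFrame({
    "ledger_amount":  [100.0, 250.0, 99.5, 250.0, 5000.0],
    "switch_amount":  [100.0, 250.0, 100.0, 250.0, 50.0],
    "settlement_lag": [1, 1, 2, 1, 9],  # days between posting and settlement
})
txns["amount_diff"] = (txns["ledger_amount"] - txns["switch_amount"]).abs()

# Records that are easy to isolate get label -1 and become resolution candidates
model = IsolationForest(contamination=0.2, random_state=0)
txns["flag"] = model.fit_predict(txns[["amount_diff", "settlement_lag"]])
print(txns[txns["flag"] == -1])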

FSS Universal Data Wizard catapults time to market for new implementations by 40%. Across deployments, the initial process of configuring proprietary Core Banking and the Switch file formats is the lengthiest step in the implementation cycle.

Data Wizard maintains a meta-repository of Switch and Host file formats for reuse across deployments, saving time and costs on implementation projects.

To further optimize implementation cycle time, FSS Smart Recon has enhanced its critical General Ledger Tally process to support a generic framework for automation of closing balances between General Ledger and Core Banking systems.

FSS Smart Recon functionality can be accessed from the Oracle cloud providing greater deployment flexibility to customers. The availability via a cloud platform transforms the economics of ownership for financial institutions and neo-banks and customers wishing to migrate from legacy systems, in particular.

Deployed by globally leading banks and payment processors, FSS Smart Recon is a unified system for reconciling digital payments and incorporates data import, transformation and enrichment, data matching, exception management, and timely reporting and analytics.

The solution supports a wide diversity of payment classes ATM Recon, Online Commerce, Wallets, Instant Payments (IMPS and UPI), NEFT, RTGS, and QR Code Payments with built-in flexibility to rapidly onboard new payment channels and scheme-based payments.

Financial Software and Systems (FSS) is a leader in payments technology and transaction processing. The company offers an integrated portfolio of software products, hosted payment services, and software solutions built over 29+ years of experience.

FSS, end-to-end payments products suite, powers retail delivery channels including ATM, POS, Internet, and Mobile as well as critical back-end functions including cards management, reconciliation, settlement, merchant management, and device monitoring.

Headquartered in Chennai, India, FSS services leading global banks, financial institutions, processors, central regulators, and governments across North America, UK, Europe, ME, Africa, and APAC and has 2,500 experts on-board.


User who turned $200 in Ethereum into $250k due to altcoin bug comes clean – CryptoSlate

Last week, a coin attempting to leech off the success of Yearn.finance (YFI) launched.

Dubbed Soft Yearn (SYFI), the Ethereum-based crypto asset was launched with the premise of emulating the price of 0.0003 YFI through a rebasing mechanism popularized by Ampleforth (AMPL). As the project's website reads:

"Soft Yearn is an adaptive cryptocurrency that will expand or contract its supply automatically. A soft-pegged currency is tied to another currency. When a soft-pegged currency has a significant difference from the main currency's price, the contraction or expansion algorithm will converge the market price to the pegged price."
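As a rough illustration of how an Ampleforth-style rebase works in general (this is not SYFI's actual contract code), every balance is scaled by the ratio of market price to target price:

# Generic Ampleforth-style rebase arithmetic; not SYFI's contract.
def rebase(total_supply: float, market_price: float, target_price: float) -> float:
    """Scale supply so that, all else being equal, price converges to the target."""
    return total_supply * (market_price / target_price)

# With SYFI trading far above its peg of 0.0003 YFI (roughly $10 at the time),
# a rebase multiplies every balance many times over; selling the inflated
# balance at the pre-rebase market price is essentially what the exploit did.
print(rebase(total_supply=100_000, market_price=190.0, target_price=10.0))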

The cryptocurrency saw immediate success.

Uniswap price tracker Chartex recorded the coin rallying around 1,000 percent in the two hours from its launch. At the peak of $190, the coin had a market capitalization in the millions, despite having launched just hours earlier.

But just a day after its launch, SYFI crashed by over 99.9 percent to under a cent, with the coin becoming a laughingstock on Twitter as millions of dollars' worth of wealth evaporated within moments.

What happened was that a bug in SYFI's rebasing mechanism allowed an anonymous user to obtain a large sum of coins during the rebase to 1 SYFI = 0.0003 YFI, then sell those coins at the pre-rebase price. There was also a second bug that incorrectly rebased the coin relative to the target price.

As a result of these bugs, the user made off with 740 ETH, currently valued at around $250,000, despite only putting up around $200 worth of ETH as starting capital.

Today, that user revealed himself and came clean, telling the story of what happened and what comes next.

On the morning of Sep. 6, a Twitter user with the handle Amplify released a 30-part Twitter thread, revealing that it was he who pulled off the impossible.

"I am the person who sold $SYFI on Uniswap at the same time as the rebase. Or, I am the person who exploited the rebase bug in $SYFI."

After identifying himself as someone with a small crypto account of under $5,000 who has dabbled in arbitrage, Robinhood, stocks, and many other investment venues, he got into what exactly happened on that fateful day.

Amplify explained that he found SYFI through Telegram groups, where users regularly ape into coins with perceived intrinsic value in pursuit of exponential profits. He eventually stumbled across SYFI, on which he claimed he made an instant 200 percent on one ETH by buying the coin when it listed on Uniswap, then selling the top.

SYFI was heavily promoted because its ICO, conducted through an Ethereum-based platform called Bounce, sold out within a minute according to some users.

A day after the coins launch on Uniswap, the Soft Yearn team prepared for the coins first rebase.

Amplify, expecting the coin to potentially rally after the rebase took place, decided to buy back in with 0.5 ETH.

Due to the aforementioned bug, when the rebase happened, he was offered the opportunity to sell his sudden influx of SYFI coins for 740 Ethereum, again valued at around $250,000.

He took the opportunity. Chances are, there were other users looking at the exact same thing as he was who didn't pull the trigger:

"I didn't know what would happen or where this fantastical made-up number in ETH would come from ... Now, looking back, this was going to happen regardless. If I didn't push that transaction through with 500 gwei, someone else would have (and likely did) with 490 gwei."

And as we now know, Amplify succeeded, having drained the liquidity from the Soft Yearn pool, leaving most users with coins worth basically nothing.

While he didn't explicitly say that he would be returning funds, the user has been gracious in his replies, pledging to return some of the funds to users who verifiably held SYFI or liquidity provider tokens at the time of the rebase. He added that he will be donating some of the capital to a Gitcoin grant of his choosing.

SYFI took a big reputational blow. Still, the anonymous founders behind the project intend to move forward with a relaunch of the coin.

They wrote in a recent Discord post:

"The Soft Yearn team is devastated by this event. We understand everyone's frustration and anger; please bear with us while we work on sYFIV2 to fix things. Although this was a major setback, we believe we can come back stronger as we initiate our migration plan."

It's worth noting that a project attempting to accomplish a similar feat to Soft Yearn, the Chainlink-focused Soft LINK, has a mere $1.3 million market capitalization three weeks after its launch.



Bitcoin Crashes Below $10,000, Is The ‘Altcoin & DeFi Apocalypse’ Over Yet? – Bitcoin Exchange Guide

While the stock market is closed today in observance of Labor Day, the crypto market is bleeding.

For now, in another red day to mark the start of a new week, cryptocurrencies had a repeat performance. Much like last week, bitcoin dropped below $10,000 to as low as $9,880 on Bitstamp, albeit briefly.

The real trading volume meanwhile remains weak at just $1.4 billion.

"One more flush before a bounce seems likely. Stocks probably go up tomorrow. Corn could rally with it," said one trader.

However, if we look back at the last bull cycle, the average correction is 35%. Given that BTC has only retraced 20% so far, another drop could see us below $8,000.

Meanwhile, Bitcoin's Spent Output Profit Ratio (SOPR), which highlights whether the average stakeholder is in a state of profit or loss, dipped below 1 for the first time since April. Currently, it is hovering right at the neutral line. Rafael Schultze-Kraft, CTO at crypto data provider Glassnode, noted:

"This means bitcoins moved on-chain at a (small) loss, potentially shaking out some weak hands. Imo it is very crucial to hold this level here so a bearish trend reversal doesn't get confirmed."
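For reference, SOPR is the fiat value of coins when they are spent divided by their fiat value when they were created, aggregated over spent outputs; a reading below 1 means coins are moving at an aggregate loss. A minimal sketch of the calculation, with invented example data:

# SOPR = value of outputs when spent / value of the same outputs when created.
spent_outputs = [
    # (BTC amount, USD price at creation, USD price at spend) - invented data
    (0.5, 11_800, 10_100),
    (1.2, 10_900, 10_050),
]
created_value = sum(btc * p_created for btc, p_created, _ in spent_outputs)
spent_value = sum(btc * p_spent for btc, _, p_spent in spent_outputs)
print(f"SOPR = {spent_value / created_value:.3f}")  # < 1: aggregate loss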

While bitcoin is at the precipice, Ether is getting beaten hard. The second-largest cryptocurrency lost nearly 35% of its value last week to drop to $320. Currently, ETH/USD is around $335. Analyst Rekt Capital said:

"Indeed the $360 has switched into a resistance, offering lower prices. Price breaks back into the $160-$360 range. Very low $300s is a real possibility going forward. $290 would be perfect."

The spark plug for these losses was a decline of 6.1% in the Ether balance on top 100 exchanges, from 16.92 million to 15.89 million over the past week.

Last week, before crashing, the ETH price jumped to a new 2020 high, at levels last seen in June 2018, which led to an increase in exchange balances as investors and traders took some profits.

The good thing is, despite the drop in price, the Ethereum network keeps on growing, with the number of addresses with a balance in ETH hitting a new all-time high on the weekend at 45.88 million addresses, as per data source IntoTheBlock.

This means people are buying the dips. These addresses have increased by 34.8% since the beginning of 2020.

"The silver lining to this plunge in ETH price is DeFi will yet again benefit from the temporary collapse in Ethereum gas fees, and the vicious loop of chasing higher yield will resume yet again until the pressure cooker can no longer hold," said Denis Vinokourov of Bequant.

A crash in Ether prices is not good news for altcoins, especially DeFi tokens.

However, unlike last week's 10% to 20% losses, the altcoins are down 2% to 10% today. Among the top altcoins, LINK, with 3.70% gains, and BSV, up 5.97%, are the only exceptions.

As for the DeFi tokens, in the past hour they have all turned green fast, with CRV, SRM, and JUST up 8%.

But in the past 24 hours, SNX and CRV are down over 11%, RUNE 9.4%, SRM 9.3%, YFI 8.5%, KAVA 8.1%, BAND 7.2%, REN 6.6%, LRC 5.1%, KNC 4.7%, LEND 3.1%, COMP 1.7%, AMPL 1.4%, and WBTC 1%.

The biggest loser in the past seven days has been Ampleforth, which lost over 63% of its value with other notable mentions, including Melon (50%), Bancor (47%), and Curve (46%), as per CoinGecko.

As a result, the total value locked in the DeFi sector also dropped by 21% to $7.5 billion.

While more losses could be in order, depending on Bitcoin's next move, which itself is waiting for the stock market, trader and economist Alex Kruger says the alts apocalypse the market experienced won't happen again even if BTC were to go down.

The alts apocalypse was the leading digital currency losing 5% of its value while alts crashed 20-50%, which was extraordinary because the multiplier is usually in the 1.5-3x range, not 4-10x.

"Feel confident alts bottomed. BTC may flirt again with 9Ks. But alts bottomed. What the market saw yesterday was total obliteration. Strong hands remained, weak hands folded," said Kruger.


YFI Climbs 50% From $18K Low While The Rest Of Crypto Stagnates, But Why? – NewsBTC

As incredible as Chainlink's rise has been across the crypto market, it has been eclipsed by the emergence of one extremely rare coin that has since become even more expensive than Bitcoin itself: Yearn.Finance (YFI).

The ultra-scarce DeFi digital asset has been defying gravity since its debut. And although there was a massive collapse of $20,000 per token from local highs, the asset has recovered over 50% while Bitcoin and Ethereum stagnate. But what's the reason for YFI's runaway success?

The DeFi trend has taken a wild and wacky turn, bringing back memories of the ICO boom, complete with investors being burned by the hottest new token.

Pizza and Hot Dogs fresh out of the DeFi oven have left many crypto investors taking the plunge with a bad taste in their mouths; elsewhere in the DeFi space, assets have been far more rewarding.

Take Yearn.Finance, for example. This sizzling hot DeFi altcoin has grown from $5,000 to just under $40,000 at the high. The peak price is currently four times Bitcoin's price over the last week.

Related Reading | Pizza & Hot Dogs: How Uniswap's Profit Buffet Can Burn Crypto Investors

Bitcoin, Ethereum, and other top crypto assets have been consolidating at support in an attempt to hold strong. YFI, however, has left these powerhouses in its dust, showing off just how bullish the momentum has been.

It's this bullish momentum that has helped YFI regain as much as 50% of its recent losses, while the two top reigning crypto assets continue to perform poorly.

YFI's more-than-50% recovery stopped short at daily resistance but may have found support at $22,000 per token. Four daily closes above that key level gave bulls enough confidence for another push higher.

Related Reading | Despite BTC Drop $10k, Top Ethereum DeFi Coins Undergo Strong Bounce

The fall to daily support also aligns with a retest of the mid-Bollinger Band, which has thus far been holding. When a retest of the mid-BB holds strong, price often rises again towards the top of the Bollinger Bands.
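For readers unfamiliar with the indicator, Bollinger Bands are a rolling mean (the mid-band) plus and minus a multiple of the rolling standard deviation. A quick pandas sketch using the conventional 20-period, two-sigma settings (not necessarily the analyst's):

import pandas as pd

def bollinger_bands(close: pd.Series, window: int = 20, k: float = 2.0) -> pd.DataFrame:
    # Mid-band is the simple moving average; outer bands sit k standard deviations away
    mid = close.rolling(window).mean()
    std = close.rolling(window).std()
    return pd.DataFrame({"lower": mid - k * std, "mid": mid, "upper": mid + k * std})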

If that acts as the next target, YFI could soar to retest $36,000 in the days ahead. If the buzzing DeFi token can get through there, a new all-time high may be set.

As for what's fueling YFI's enormous rally, DeFi is currently an unstoppable trend luring investors in with its appetizing buffet of profits. The token rising from $5,000 to over $30,000 so quickly has caught the attention of all crypto market participants.

Only 30,000 YFI tokens exist, giving the asset even more scarcity than Bitcoin. The low supply is also a primary reason for the high price per coin, and why the asset's price is so extremely volatile. That same volatility, however, has made YFI clearly more attractive to crypto investors than even Bitcoin or Ethereum recently, as the asset is easily beating them out in momentum.
