
Democrats ask Apple, Google to prohibit apps from using data mining to target people seeking abortions – The Hill

A group of Senate Democrats asked Apple and Google to prohibit apps available in their app stores from using data mining practices to target people seeking abortion services in letters sent to the tech giants Friday.

The senators, led by Sen. Ed Markey (Mass.), asked the companies to review and update their app store policies and practices as necessary to address concerns around allowing apps to engage in data practices that may victimize individuals who seek or have sought abortion services.

The Democrats said they are concerned that anti-abortion prosecutors or other actors will try to access and leverage personal information, such as data about locations, online activity, health and biometrics, in ways that threaten the well-being of those exercising their right to choose.

The senators underscored the need for the updates following the leak of a draft majority opinion indicating the Supreme Court is poised to overturn Roe v. Wade, which could make abortion illegal in states across the country with restrictive laws.

The senators said information about app users' fertility, browser histories indicating interest in contraception or location information showing a user visited a gynecologist could become a data trove for actors who are intent on targeting, intimidating, and harming individuals who seek abortions or individuals who simply take steps to promote their reproductive health.

The letters were also signed by Sens. Elizabeth Warren (D-Mass.), Ron Wyden (D-Ore.), Bernie Sanders (I-Vt.) and Cory Booker (D-N.J.).

Spokespeople for Apple and Google did not immediately respond to requests for comment.

The letters follow one sent by more than 40 Congressional Democrats to Google earlier this week urging the company to stop collecting and storing location data over concerns that it could be obtained by prosecutors to target individuals seeking abortion care.


Ultima Genomics partners with Sentieon and Google DeepVariant to deliver high-performance variant calling – PR Newswire

NEWARK, Calif., May 31, 2022 /PRNewswire/ -- Ultima Genomics, Inc. today announced partnerships with Sentieon and Google DeepVariant to offer highly accurate and accelerated variant calling for customers who wish to run pipelines at scale. Both DeepVariant and Sentieon have developed Ultima-specific machine learning models which will be made available in the coming months. Ultima's early access customers will be offered a Sentieon license to process Ultima data at no cost through the end of 2022.

Over the last 5 years, Ultima Genomics has developed a fundamentally new sequencing architecture designed to scale beyond conventional approaches, including completely different flow cell engineering, sequencing chemistry, and machine learning. Ultima is currently in an early access program for the UG 100, its first high-throughput NGS instrument using this new technology architecture which enables customers to generate high-quality sequencing data at a cost of $1/Gb.

"Ultima's mission is to continuously drive the scale of genomic information," said Gilad Almogy, CEO of Ultima Genomics. "As we prepare to launch our first high-throughput sequencer, we are excited to partner with informatics leaders such as Google and Sentieon to continually improve and accelerate the generation of high-quality data at scale."

Google DeepVariant

DeepVariant is an open-source technology that identifies genetic variants in sequencing data using deep learning. DeepVariant is noted for its high accuracy, with awards in multiple PrecisionFDA challenges, and for its extensibility to new sequencing methods. DeepVariant is available under the BSD open source license.

"I'm excited by the potential for new methods to generate genomic data at much greater scales", said Andrew Carroll, Product Lead for Genomics in Google Research. "I'm particularly impressed by the technical abilities of the Ultima informatics team, who are the first external team to extend DeepVariant's input channels to capture additional rich information specific to their sequencer in order to further improve accuracy."

Sentieon

Sentieon, a developer of highly accurate and optimized algorithms for bioinformatics applications, has created tools that are faster and cheaper alternatives to the industry standard for secondary analysis in NGS data processing. Sentieon technology provides high accuracy and ease of use, while dramatically improving processing speed.

"We are delighted to support Ultima's UG 100 platform as it increases the potential for scale in the high-volume sequencing market," said Jun Ye, CEO of Sentieon."This partnership enables Ultima's customers to gain additional efficiencies when generating high-quality sequencing data by lowering the cost of computing. During our collaboration, we've already seen significant improvements in sequencing quality, and look forward to continued optimization of our solutions."

Data from both the Sentieon and DeepVariant models will be shared at the upcoming AGBT conference June 6-9th, 2022.

About Ultima Genomics

Ultima Genomics is unleashing the power of genomics at scale. The Company's mission is to continuously drive the scale of genomic information to enable unprecedented advances in biology and improvements in human health. With a virtually endless need for more genomic information to address biology's complexity and dynamic change, it is important to continue to challenge conventional next-generation sequencing technologies. Ultima has built a revolutionary new sequencing architecture designed to scale far beyond conventional technologies and overcome the cost-driven tradeoffs scientists and clinicians are forced to make between the breadth, depth and frequency with which they use genomic information. By driving down the costs of sequencing, Ultima hopes to catalyze the next phase of genomics in the 21st century.

About Google

Google's mission is to organize the world's information and make it universally accessible and useful. Through products and platforms like Search, Maps, Gmail, Android, Google Play, Chrome, and YouTube, Google plays a meaningful role in the daily lives of billions of people and has become one of the most widely-known companies in the world. Google is a subsidiary of Alphabet Inc. For more information, visit our Google AI Blog.

About Sentieon

Sentieon develops award-winning, extremely accurate and highly-optimized algorithms and software for bioinformatics data processing applications. Sentieon is a team of professional engineers with deep expertise in algorithm, software, and system optimization acquired through experience developing image processing, computational lithography, large-scale data mining, and bioinformatics applications. Using our accumulated expertise in modeling, optimization, machine learning, and high-performance computing, we strive to enable precision data for precision medicine. Visit our website for more information.

Media inquiries: [emailprotected]

SOURCE Ultima Genomics


Progress of Bitcoin Mining in Mississippi | Branded Voices | Advertise – Native News Online


Since the launch of Bitcoin in 2009, Mississippi has been a hotbed for Bitcoin mining activity.

The state's large population and cheap electricity have made it an attractive destination for miners.

In recent years, Mississippi has seen a surge in Bitcoin mining activity. This is largely due to the state's cheap electricity. Bitcoin Code is considered one of the best platforms for providing profitable bitcoin trades.

According to a recent report, Mississippi has the second-lowest electricity rates in the United States. This makes it an ideal location for miners who are looking to save on their energy costs.

Mississippi is also home to a number of large data centers. These facilities are often used by miners to store and process their mined bitcoins.

The state's infrastructure and favorable climate have made it a popular destination for mining operations.

Recent data indicates that Mississippi has become one of the leading states for Bitcoin mining. This is likely due to the combination of cheap electricity and favorable conditions for miners.

The progress of bitcoin mining in Mississippi is evident. In just a short amount of time, the state has become a hub for cryptocurrency activity. With more and more people taking interest in this new form of currency, it is clear that bitcoin mining is here to stay in Mississippi.

There are currently three main bitcoin mines operating in Mississippi. Each one is constantly expanding its operations and working to attract new miners. The three mines are located in different parts of the state, which allows for a robust mining operation.

The first mine is located in central Mississippi. It began operations in early 2018 and has since then been working non-stop to expand its capabilities. The mine is currently home to over 1,000 miners who are constantly working to mine new bitcoins.

The second mine is located in southern Mississippi. It began operations in late 2018 and has since then been steadily growing its mining operation. The mine is currently home to over 500 miners who are constantly working to mine new bitcoins.

The third and final bitcoin mine is located in northern Mississippi. It began operations in early 2019 and has since then been working diligently to expand its capabilities. The mine is currently home to over 300 miners who are constantly working to mine new bitcoins.

All three of these mines are making incredible progress in the world of bitcoin mining. They are attracting new miners, expanding their operations, and constantly working to improve their infrastructure. This progress is good news for Mississippi and is a sign that the state is quickly becoming a hub for cryptocurrency activity.

As the world's leading cryptocurrency, Bitcoin has seen a lot of mining activity over the years. And Mississippi is no exception. In fact, the state is home to one of the largest Bitcoin mines in North America.

But what does the future hold for Bitcoin mining in Mississippi?

Some believe that the state's cheap electricity and ample land could make it a hub for Bitcoin mining. Others, however, think that regulation could stifle the industry's growth.

Only time will tell what the future of Bitcoin mining in Mississippi will be. But one thing is for sure: the state will play a major role in shaping the future of this burgeoning industry.

The future of Bitcoin mining in Mississippi is shrouded in uncertainty. The state has seen a decline in the number of miners over the past few years, and it is unclear if this trend will continue. Mississippi's electricity rates are among the highest in the United States, which makes it difficult for miners to turn a profit. Additionally, the state does not have any major Bitcoin mining facilities, so there is little infrastructure in place to support a large-scale mining operation.

That said, there are still some people who are optimistic about the future of Bitcoin mining in Mississippi. The state's cheap electricity and lack of regulations could make it an attractive destination for miners looking to set up shop. If more miners move to Mississippi, they could help build the infrastructure that is needed to support a thriving mining industry. Only time will tell if Mississippi can become a major hub for Bitcoin mining.

DISCLAIMER: Branded Voices features paid content from our marketing partners. Articles are not created by Native News Online staff. The views and opinions expressed in the Branded Voices are those of the authors and do not necessarily reflect the official policy or position of Native News Online or its ownership. Any content provided by our bloggers or authors are of their opinion and are not intended to malign any religion, ethnic group, club, organization, company, individual or anyone or anything.


Cloud hiring levels in the mining industry rose in April 2022 – Mining Technology

The proportion of mining industry operations and technologies companies hiring for cloud-related positions rose in April 2022 compared with the equivalent month last year, with 32.4% of the companies included in our analysis recruiting for at least one such position.

This latest figure was higher than the 30.1% of companies that were hiring for cloud-related jobs a year ago but a decrease compared to the figure of 47.8% in March 2022.

When it came to the share of all job openings linked to cloud, the proportion dropped in April 2022 from March 2022, with 4.6% of newly posted job advertisements being linked to the topic.

This latest figure was an increase compared to the 2.9% of newly advertised jobs that were linked to cloud in the equivalent month a year ago.

Cloud is one of the topics that GlobalData, from which our data for this article is taken, has identified as being a key disruptive force facing companies in the coming years. Companies that excel and invest in these areas now are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

Our analysis of the data shows that mining industry operations and technologies companies are currently hiring for cloud jobs at a rate lower than the average for all companies within GlobalData's job analytics database. The average among all companies stood at 6.9% in April 2022.

GlobalData's job analytics database tracks the daily hiring patterns of thousands of companies across the world, drawing in jobs as they're posted and tagging them with additional layers of data on everything from the seniority of each position to whether a job is linked to wider industry trends.

You can keep track of the latest data from this database as it emerges by visiting our live dashboard here.


Remote learning apps sent kids' data and behavioral information to advertisers – 9to5Mac

Millions of children who used remote learning apps during the pandemic have had their personal data and even behavioral information shared with advertisers, says a new report.

This includes apps that students were required to use by their schools, says Human Rights Watch.

The organization investigated a total of 164 apps and websites endorsed by governments across multiple countries.

"Governments of 49 of the world's most populous countries harmed children's rights by endorsing online learning products during Covid-19 school closures without adequately protecting children's privacy," Human Rights Watch said in a report released today […]

Of the 164 EdTech products reviewed, 146 (89 percent) appeared to engage in data practices that risked or infringed on children's rights. These products monitored or had the capacity to monitor children, in most cases secretly and without the consent of children or their parents, in many cases harvesting personal data such as who they are, where they are, what they do in the classroom, who their family and friends are, and what kind of device their families could afford for them to use.

Most online learning platforms examined installed tracking technologies that trailed children outside of their virtual classrooms and across the internet, over time. Some invisibly tagged and fingerprinted children in ways that were impossible to avoid or erase even if children, their parents, and teachers had been aware and had the desire to do so without destroying the device.

The group found that most online learning platforms shared children's data with adtech companies, who then used it not just for personalized ads, but also to influence website feeds.

Many more EdTech products sent children's data to AdTech companies that specialize in behavioral advertising or whose algorithms determine what children see online […]

These companies not only distorted children's online experiences, but also risked influencing their opinions and beliefs at a time in their lives when they are at high risk of manipulative interference.

Families often lacked the ability to opt out of remote learning apps with poor privacy policies.

Some governments made it compulsory for students and teachers to use their EdTech product […]

Most EdTech companies did not allow students to decline to be tracked; most of this monitoring happened secretly, without the child's knowledge or consent. In most instances, it was impossible for children to opt out of such surveillance and data collection without opting out of compulsory education and giving up on formal learning during the pandemic.

Human Rights Watch says that it will share its evidence with anyone who wants to verify the findings, or conduct further analysis.

A previous study found that things are even worse on Android, with educational apps on that platform eight times more likely to share personal data than iOS ones.


Big Data Analytics Market Worth $638.66 Billion, Globally, by 2028 at 15.3% CAGR – Exclusive Report by The Insight Partners – PR Newswire

NEW YORK, May 25, 2022 /PRNewswire/ -- The Insight Partners published latest research study on "Big Data Analytics Market Forecast to 2028 - COVID-19 Impact and Global Analysis by Component (Software and Services), Analytics Tool (Dashboard and Data Visualization, Data Mining and Warehousing, Self-Service Tool, Reporting, and Others), Application (Customer Analytics, Supply Chain Analytics, Marketing Analytics, Pricing Analytics, Workforce Analytics, and Others), and End Use Industry (Pharmaceutical, Semiconductor, Battery Manufacturing, Electronics, and Others)", the global big data analytics market is expected to grow from $239.75 billion in 2021 to $638.66 billion by 2028; it is estimated to grow at a CAGR of 15.3% from 2022 to 2028.
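As a rough sanity check, the implied growth rate can be recomputed from the two market-size figures; a minimal sketch follows, with the small gap to the reported 15.3% presumably due to rounding and to how the base year is counted:

```python
# Rough check of the reported CAGR from the two market-size figures.
start_value = 239.75   # USD billion, 2021
end_value = 638.66     # USD billion, 2028
years = 7              # 2021 -> 2028

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~15.0%, close to the reported 15.3%
```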

The Sample Pages Showcases Content Structure and Nature of Information Included in This Research Study Which Presents A Qualitative and Quantitative Analysis:https://www.theinsightpartners.com/sample/TIPRE00029289/

Report Coverage Details:

Market Size Value in 2021: US$ 239.75 Billion
Market Size Value by 2028: US$ 638.66 Billion
Growth Rate: CAGR of 15.3% from 2022 to 2028
Forecast Period: 2022-2028
Base Year: 2022
No. of Pages: 249
No. of Tables: 115
No. of Charts & Figures: 97
Historical Data Available: Yes
Segments Covered: Component, Analytics Tool, Application, and End Use Industry
Regional Scope: North America; Europe; Asia Pacific; Latin America; MEA
Country Scope: US, UK, Canada, Germany, France, Italy, Australia, Russia, China, Japan, South Korea, Saudi Arabia, Brazil, Argentina
Report Coverage: Revenue forecast, company ranking, competitive landscape, growth factors, and trends

Big Data Analytics Market: Competitive Landscape and Key Developments

Amazon Web Services, FICO, Hewlett Packard Enterprise Development LP (HPE), IBM Corporation, Microsoft Corporation, Oracle Corporation, SAP SE, SAS Institute Inc., TIBCO Software Inc., and Teradata are among the key players profiled during the study of the global big data analytics market. In addition, several other essential market players were studied and analyzed to get a holistic view of the market and its ecosystem.

Schedule A Pre-Sale Discussion with The Author Team in A Slot That You Prefer to Address Queries on Scope of The Study, Customization, Introduction to Research Methodology, Assistance on Technologies and Market Definitions:https://www.theinsightpartners.com/inquiry/TIPRE00029289/

In July 2021, Oracle Big Data Service (BDS), Oracle's fully managed Hadoop service, received a substantial update from Oracle Cloud Infrastructure (OCI). The updated BDS version takes advantage of Oracle's Apache Hadoop Distribution (ODH). This version of BDS is designed from the bottom up to be cloud-scale. It is natively integrated with OCI, making it easier for its clients to lift and shift or create entirely new analytics solutions on Hadoop's capabilities, such as Hive, Spark, and HBase.

In December 2020, Amazon Web Services (AWS) unveiled Amazon HealthLake, a HIPAA-compliant service aimed at supporting interoperability standards and driving the use of big data analytics in healthcare.

Big data analytics is a vital component of businesses as it enables them to manage, process, and streamline massive datasets in real-time while improving the decision-making capabilities of these companies. Further, the primary goal of big data analytics solutions is to support enterprises to understand their customers and target population better, hence optimizing marketing campaigns. Furthermore, big data analytics is used by various businesses to examine huge amounts of data created by offline and online commerce. Moreover, with the emergence of cloud-based big data analytics, users can access data from anywhere. This helps several big data analytics service providers to increase their investment in cloud technology by having a competitive edge over other service providers.

Click Here to Avail Lucrative DISCOUNTS on Our Latest Research Reports. We Offer Student, Enterprise, and Special Periodic Discounts to Our Clientele. Please Fill the Form to Know DISCOUNTED PRICE

The adoption of big data analytics software by several companies to deliver enhanced & faster decision-making solutions and provide a competitive advantage by analyzing and acting on information on time is driving the global big data analytics market forward. Further, the market's growth is aided by a surge in demand for cloud-based big data analytics software among small and medium businesses. However, the market's expansion is projected to be hampered by high implementation costs and a lack of skilled labor. The rising demand for better insights for corporate planning and emerging trends, including social media analytics, is expected to provide significant prospects for market players during the forecast period. These factors are projected to have a definite impact on the growth of the big data analytics market.

Increasing strategic initiatives by the major market players drive the big data analytics market growth. In May 2021, Telefónica Tech, a digital business organization, and Microsoft partnered on Azure Edge Zone. Furthermore, Telefónica's 5G connection and Microsoft's Edge computing capabilities are combined in this collaboration, allowing industrial processes to accelerate digital transformation. In May 2020, IBM and Red Hat developed new edge computing technologies to help telecommunication organizations adapt to the 5G era. Furthermore, IBM has dedicated edge computing and Telco network cloud services teams to help customers deploy 5G and edge-enabled solutions across all industries.

Big Data Analytics Market: Component Segment Overview

Based on component, the big data analytics market is bifurcated into software and services. In 2021, the software segment led the market, accounting for a larger market share. Also, the segment is expected to retain its dominance during the forecast period. Moreover, the services segment is expected to register a higher CAGR in the global big data analytics market from 2022 to 2028.

Have A 15-Minute-Long Discussion with The Lead Research Analyst and Author of The Report in A Time Slot Decided by You. You Will Be Briefed About the Contents of The Report and Queries Regarding the Scope of The Document Will Be Addressed as Well: https://www.theinsightpartners.com/speak-to-analyst/TIPRE00029289

Impact of COVID-19 Pandemic on SAM Big Data Analytics Market:

Brazil recorded the highest number of COVID-19 cases in SAM, followed by Ecuador, Chile, Peru, and Argentina. In 2021, the COVID-19 pandemic led to the closure of economic activities across SAM to combat the spread of SARS-CoV-2. Governments in SAM took various actions to protect their citizens and contain the spread of the virus infection.

The presence of various developing countries in SAM makes it one of the key markets for the future growth of the big data analytics market. Growing population, rising disposable income, increasing demand for advanced technologies, and rising focus on digital transformation are a few of the key factors expected to drive the market growth in SAM in the coming years. With an increase in the adoption of work-from-home trends during lockdowns, organizations realize the importance of big data analytics systems to foresee security threats using analytics software. The value of big data analytics has increased exponentially during the COVID-19 pandemic as the number of threats that can destroy the infrastructure of a business and its systems increased, boosting the big data analytics market growth in the region.

Immediate Delivery of Our Off-The-Shelf Reports and Latest Research Studies, Through Flexible and Convenient Payment Methods: https://www.theinsightpartners.com/buy/TIPRE00029289/

Browse Adjoining Reports:

Big Data and Business Analytics Market Forecast to 2028 - COVID-19 Impact and Global Analysis by Component (Software, Service); Deployment Type (Cloud, On-premises); Analytics Tools (Dashboard and Data Visualization, Self-Service Tools, Data Mining and Warehousing, Reporting, Others); Application (Customer Analytics, Supply Chain Analytics, Marketing Analytics, Risk and Credit Analytics, Workforce Analytics, Others); End-user (BFSI, IT and Telecom, Retail and Consumer Goods, Manufacturing, Healthcare, Government and Education, Others) and Geography

Hadoop Big Data Analytics Market to 2028 - COVID-19 Impact and Global Analysis by Component (Solutions, Services); Application (Risk and Fraud Analytics, IoT, Customer Analytics, Security Intelligence, Others); Industry Vertical (BFSI, IT and Telecommunication, Government and Defense, Manufacturing, Transportation and Logistics, Healthcare, Others) and Geography

Big Data Analytics in Semiconductor and Electronics Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Component (Software, Services); Analytics Tool (Dashboard and Data Visualization, Data Mining and Warehousing, Self-service Tools, Reporting, Others); Application (Customer Analytics, Supply Chain Analytics, Marketing Analytics, Pricing Analytics, Workforce Analytics, Others); End-user (Semiconductor, Electronics) and Geography

Insurance Big Data Analytics Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Component (Software, Services); Application (Pricing Premiums, Prevent and Reduce Fraud, Gain Customer Insight, Others) and Geography

Big Data Analytics In Healthcare Market Forecast to 2028 - Covid-19 Impact and Global Analysis - By Component (Software, Services); Deployment Type (On-Premises, Cloud); Type (Descriptive Analytics, Predictive Analytics, Prescriptive Analytics, and Diagnostic Analytics); and Application (Clinical Analytics, Financial Analytics, and Operational Analytics)

Data Visualization Platform Market Forecast to 2028 - Covid-19 Impact and Global Analysis - by Deployment (Cloud-based, On-premise); Organizational Department (Marketing, Executive Management, Operations, Sales, Others); Industry Vertical (BFSI, IT and Telecom, Government, Education, Retail and E-commerce, Manufacturing, Others) and Geography

Data Warehousing Market Forecast to 2028 - Covid-19 Impact and Global Analysis - by Offering (Extraction Transportation and Loading (ETL) Solutions, Statistical Analysis, Data Mining, Others); Type Of Data (Unstructured Data, Semi-structured and Structured Data); Deployment Type (On-premise, Cloud, Hybrid); Enterprise Size (Small and Medium-sized Enterprises, Large Enterprises); Industry Vertical (BFSI, Media and Entertainment, Manufacturing, IT and Telecom, Retail, Others) and Geography

Customer Analytics Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Component (Solution, Services); Deployment Type (On-premises, Cloud); Enterprise Size (Small and Medium-sized Enterprises, Large Enterprises); End-user (BFSI, IT and Telecom, Media and Entertainment, Consumer Goods and Retail, Travel and Hospitality, Others) and Geography

Supply Chain Analytics Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Platform (Software, Service); Deployment Type (On-Premise Deployment, Cloud Deployment, Hybrid Deployment); Industry Vertical (Retail, Manufacturing, Automotive, Food and Beverages, Others) and Geography

Marketing Analytics Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Component (Software, Services); Application (Social Media, Content Optimization, Campaign Management, Email Marketing, Others); Industry Vertical (Retail and Consumer Goods, Healthcare, BFSI, Travel and Hospitality, Automotive, Telecommunication, Others) and Geography

Edge Computing Market Forecast to 2028 - Covid-19 Impact and Global Analysis - by Component (Hardware, Software, Services); Application (Smart Grids, Connected Cars, Traffic Management, Critical Infrastructure Monitoring, Environmental Monitoring, Security and Surveillance, Assets Tracking, Others); End-user (Energy and Utilities, Healthcare, Transportation and Logistics, Retail, Manufacturing, Media and Entertainment, IT and Telecom, Others) and Geography

Workforce Analytics Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Type (Services, Solutions); Deployment Type (Cloud Based, On-Premises); Industry Vertical (IT and Telecom, Retail, Manufacturing, Healthcare and Research, BFSI, Education, Media and Entertainment, Travel and Hospitality) and Geography

Business Analytics Market Forecast to 2028 - Covid-19 Impact and Global Analysis - by Application (Supply Chain Analytics, Spatial Analytics, Workforce Analytics, Marketing Analytics, Behavioral Analytics, Risk And Credit Analytics, and Pricing Analytics); Deployment (On-Premise, Cloud, and Hybrid); End-user (BFSI, IT & Telecom, Manufacturing, Retail, Energy & Power, and Healthcare)

About Us:

The Insight Partners is a one stop industry research provider of actionable intelligence. We help our clients in getting solutions to their research requirements through our syndicated and consulting research services. We specialize in industries such as Semiconductor and Electronics, Aerospace and Defense, Automotive and Transportation, Biotechnology, Healthcare IT, Manufacturing and Construction, Medical Device, Technology, Media and Telecommunications, Chemicals and Materials.

Contact Us:

If you have any queries about this report or if you would like further information, please contact us:

Contact Person: Sameer Joshi

E-mail: [emailprotected]

Phone: +1-646-491-9876

Press Release: https://www.theinsightpartners.com/pr/big-data-analytics-market

SOURCE The Insight Partners


A high-speed railway network dataset from train operation records and weather data | Scientific Data – Nature.com

To obtain the high-speed railway network dataset, we first collect the train operation records, mileage information and the geographical locations of the railway stations. The historical weather related data are collected based on the geographical locations, and the dates of major holidays from October 8, 2019 to January 27, 2020 are obtained. Second, we calculate the arrival and departure delay time of one train and count the number of delayed trains per hour in different directions of one station. Third, we compute the mileage of adjacent stations. Fourth, we compile statistics on the train operation conditions of China's top ten junctions. Fifth, according to the geographical locations and time stamps, the train directions, station types, weather, holidays and other complex factors are added to the operation data of high-speed trains and the delay data of railway stations. Finally, we check and validate our dataset.

Figure 1 shows the flowchart of the methodology used to obtain the high-speed railway network dataset from train operation records and weather data. The steps involved are described in detail below.

Flow chart of methodology. The figure shows the flowchart of methodology to obtain the high-speed railway network dataset from train operation records and weather data.

The source data for the high-speed railway network dataset consists of the high-speed trains operation data, the high-speed trains mileage data, the locations of railway stations, the junction stations, the weather related data and the major holidays.

High-speed train operation records consist of the historical schedule and actual operation information. We use a web scraping method implemented in Python to obtain 2,751,713 operation records of 3,399 trains from the China Railway Ticket System (https://www.12306.cn), from October 8, 2019 to January 27, 2020, 16 weeks in total. The operation records of one train consist of stopping stations, scheduled departure and arrival time, actual departure and arrival time, etc. Fig. 2 shows the China high-speed railway network; the 727 stations and actual operation lines of the 3,399 trains are included.

China high-speed railway network. The figure shows the actual operation network of China Railway High Speed, which includes the 727 railway stations and 3,399 high-speed trains in the dataset. (a) shows location of stations. (b) shows the railway lines.

According to the train operation records, we use the web scraping method to obtain the operating mileage of 3,399 trains from http://www.huochepiao.com. We obtain the data updated to 2020 because the railway routes are constantly adjusted. The attributes contained in the data include train number, station order, station name and the mileage between one station and the departure station. We supplement the missing mileage data by manual search.

We get 727 stations after deleting the duplicates based on the 3,399 high-speed trains' operating lines. The names of these stations are unique. Then, we obtain their geographic locations, which include the province, city and district. We supplement the missing location information by manual search.

In the railway network, the connection place of several trunk lines is generally called a railway junction, which is composed of several stations, inter-station connecting lines, inbound lines and signals. In the dataset, we consider ten representative junctions in China; the stations are shown in Table 1.

It is reported that the operation of high-speed trains is affected by climate, such as strong wind, low temperature and torrential rain. So we consider weather, wind power, and temperature as external influential factors to make the dataset more valuable for research. We crawl the data for 16 weeks from a website (http://www.tianqihoubao.com) that records historical weather related data by matching the districts where the stations are located. The data contains a total of 81,242 weather related samples from 727 districts.

We use the Scrapy-Redis multi-task asynchronous framework to crawl the above data and store them in MongoDB database. To improve the efficiency of I/O operations, we use mongoexport to store the data in a csv file.
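The paper exports the MongoDB collection with mongoexport; purely as an illustration, an equivalent export could be written with pymongo and pandas (the connection string, database, collection, and field names below are hypothetical):

```python
# Illustrative only: dump scraped records from MongoDB to a CSV file.
# Database, collection, and field names here are hypothetical.
import pandas as pd
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
records = client["railway"]["train_operations"].find({}, {"_id": 0})

df = pd.DataFrame(list(records))
df.to_csv("train_operations.csv", index=False)
```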

It is well known that the passenger flow is also an important factor influencing train operation. When multiple trains are late, dispatchers often need to decide the train departure order based on the capacity and the real number of passengers at one station. However, we cannot accurately obtain the real number of passengers at one station due to the high mobility of passengers. Luckily, it is clear that the number of passengers tends to be higher than usual during the holidays, especially major holidays, such as Spring Festival and National Day. Therefore, we take major holidays as one of the external influencing factors.

From October 8, 2019 to January 27, 2020, the major holidays considered are Halloween on October 31, 2019, Thanksgiving Day on November 28, 2019, National Memorial Day on December 13, 2019, Christmas on December 25, 2019, New Year's Day on January 1, 2020, Laba Festival on January 2, 2020, Chinese New Year's Eve on January 24, 2020, and Spring Festival on January 25, 2020.

In this step we correct the collected high-speed train operation records. There are some missing and wrong information in the records, which will affect the computation of train delay time and delay number. Therefore, it is crucial to correct the records before judging and computing delayed trains.

To prevent the loss of observations that may be valuable, we fill in the missing values with data from nearby dates. That is because, for one train, its running status shows a certain trend, which generally remains consistent over the same period.

In the process of data collection, we find that the actual departure time is earlier than the actual arrival time in some of the operation records, which is impossible in a real train operation scene. We regard these as abnormal data. In most cases, one train runs normally according to the schedule, and the stop time at one station is also planned. Therefore, we compute the sum of the actual arrival time and the scheduled stop time to replace the abnormal actual departure time.
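A minimal sketch of this correction rule, assuming the records sit in a pandas DataFrame with datetime columns and a scheduled stop time stored as a Timedelta (all column names are hypothetical, not taken from the dataset):

```python
# Replace abnormal actual departure times (earlier than the actual arrival)
# with actual arrival time + scheduled stop time. Column names are assumed.
import pandas as pd

def fix_abnormal_departures(df: pd.DataFrame) -> pd.DataFrame:
    abnormal = df["actual_departure"] < df["actual_arrival"]
    df.loc[abnormal, "actual_departure"] = (
        df.loc[abnormal, "actual_arrival"] + df.loc[abnormal, "scheduled_stop_time"]
    )
    return df
```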

In this step, we compute the delay time of one train on its operation line and count the number of delayed trains per hour in different directions at one station. The delay of one high-speed train includes departure delay and arrival delay. So we mainly construct these two attributes in our dataset.

The originally collected high-speed train operation records contain the train running dates, the names of the stations the trains pass through, station order, scheduled departure time and arrival time, actual departure time and arrival time, stop time, etc.

For one station $S$, the schedule defines that one train should arrive at time $t_A^S$ and leave at time $t_D^S$ after stopping at station $S$ for a period of time. In most cases, the schedule is accurate, which means that most trains will depart and arrive on time. However, due to uncontrollable reasons such as extreme weather and large passenger flow, trains may not depart or arrive on time. The actual arrival and departure times are defined as $\hat{t}_A^S$ and $\hat{t}_D^S$. Then $\hat{t}_A^S - t_A^S$ is defined as the non-on-time arrival, and $\hat{t}_D^S - t_D^S$ is defined as the non-on-time departure. Apparently, when $\hat{t}_A^S - t_A^S > 0$, the train arrives late at $S$; when $\hat{t}_D^S - t_D^S > 0$, the train departs late from $S$. When $\hat{t}_A^S - t_A^S < 0$, the train arrives at $S$ ahead of time; when $\hat{t}_D^S - t_D^S < 0$, the train departs from $S$ ahead of time.

According to the above definition, we add the attributes departure delay and arrival delay to the high-speed train operation data. We compute the non-on-time arrival and departure values. When these two values are greater than 0, they represent the train's delay time; when they are smaller than 0, they represent the time by which the train departs or arrives early. It is worth noting that one train has no arrival delay at the departure station, so the value of arrival delay is always 0, and no departure delay at the terminal station, so the value of departure delay is always 0. We store the final processing results in a CSV file.
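A minimal sketch of how these two attributes could be derived, assuming the records are loaded into a pandas DataFrame with datetime-typed columns under hypothetical names:

```python
# Derive departure/arrival delay in minutes from scheduled vs. actual times.
# Positive values mean a delay; negative values mean an early departure/arrival.
import pandas as pd

def add_delay_columns(df: pd.DataFrame) -> pd.DataFrame:
    df["departure_delay"] = (
        df["actual_departure"] - df["scheduled_departure"]
    ).dt.total_seconds() / 60
    df["arrival_delay"] = (
        df["actual_arrival"] - df["scheduled_arrival"]
    ).dt.total_seconds() / 60
    return df
```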

The departure time of one train depends on the scheduling strategy of one station when a delay occurs. Analyzing the number of historical train delays at one station and mining the existing rules can help railway dispatching. It is also an effective way to evaluate the dispatching capacity of one station. In a word, statistics on the number of arrival and departure delayed trains at one station are very valuable.

The operation line of one train is directional, divided into up and down. According to China Railway, up means that the train is heading toward Beijing or running from a branch line to a trunk line (the train number is even), and down means that the train is leaving Beijing or running from a trunk line to a branch line (the train number is odd). From [00:00, 01:00), October 8, 2019 to [23:00, 24:00), January 27, 2020, we take one hour as a time step to compute the number of departure delays and arrival delays at the 727 stations. Supposing that the number of a train passing through station $S$ is $T$, the number of trains with $T = 2n$ (even) is $U$, and the number of trains with $T = 2n - 1$ (odd) is $W$, then the number of arrival-delayed trains in the upward direction is $\sum_{i=1}^{U}\left(\hat{t}_A^S - t_A^S > 0\right)$ and in the downward direction is $\sum_{i=1}^{W}\left(\hat{t}_A^S - t_A^S > 0\right)$; the number of departure-delayed trains in the upward direction is $\sum_{i=1}^{U}\left(\hat{t}_D^S - t_D^S > 0\right)$ and in the downward direction is $\sum_{i=1}^{W}\left(\hat{t}_D^S - t_D^S > 0\right)$. We store the delay number data of the railway stations in a CSV file.
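A minimal sketch of this hourly, per-direction count, assuming the delay columns from the previous step and that the numeric part of the train number has already been extracted (all column names are hypothetical):

```python
# Count arrival- and departure-delayed trains per station, per hour, per direction.
# Direction is inferred from the numeric train number (even = up, odd = down).
import pandas as pd

def hourly_delay_counts(df: pd.DataFrame) -> pd.DataFrame:
    df["direction"] = df["train_number"].apply(lambda n: "up" if n % 2 == 0 else "down")
    df["hour"] = df["actual_arrival"].dt.floor("h")
    return (
        df.groupby(["station", "hour", "direction"])
          .agg(arrival_delays=("arrival_delay", lambda s: (s > 0).sum()),
               departure_delays=("departure_delay", lambda s: (s > 0).sum()))
          .reset_index()
    )
```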

In the high-speed railway network dataset, adjacent stations refer to neighboring stations on the train diagram that are not necessarily geographically close to each other (they may be separated by multiple small stations). Since the lines in different directions between two adjacent stations may be different, resulting in different distances between them, we add a direction attribute to the mileage data of adjacent stations (the high-speed railway network is a directed network). That is, we calculate the mileage between adjacent stations in the upward and downward directions. According to the high-speed trains' mileage data, we can get the distance $M_{S_i}$ between one station $S_i$ and the departure station, and then the distance between adjacent stations is $M_{S_i} - M_{S_{i-1}}$.
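A minimal sketch of the adjacent-station mileage computation under the same assumptions (per-train cumulative mileage records sorted by station order, with hypothetical column names):

```python
# Distance between adjacent stations = difference of cumulative mileages
# along a train's route, computed separately per train (and hence per direction).
import pandas as pd

def adjacent_station_mileage(mileage_df: pd.DataFrame) -> pd.DataFrame:
    mileage_df = mileage_df.sort_values(["train_number", "station_order"])
    mileage_df["segment_mileage"] = (
        mileage_df.groupby("train_number")["mileage_from_origin"].diff()
    )
    return mileage_df
```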

In this step, we compute the total number of the upward and downward trains, and the upward and downward arrival-delayed and departure-delayed trains passing through each junction station from October 8, 2019 to January 27, 2020. The above data can be easily computed by matching Table 1 and the junction station names in the high-speed train operation data.

In this step, we need to add the train direction, station type, weather related data and major holidays to the processed train operation data and delay number data of railway stations.

The direction of one train is divided into upward and downward. By judging whether the train number is odd or even, we get the operation direction and combine it with the train operation data. Station types include junction stations and non-junction stations. By matching the station names in Table 1 and the delay number data of railway stations, we can easily judge whether one station is a junction station and combine it with the station delay data.

Weather, wind power and temperature information of 727 stations in 16 weeks are contained in the weather related data. By matching the dates and station names, we obtain the train operation data and delay data of stations with weather related factors.

The major holidays are on October 31, 2019, November 28, 2019, December 13, 2019, December 25, 2019, January 1, 2020, January 2, 2020, January 24, 2020 and January 25, 2020. We respectively add the attribute holiday to the train operation data and the delay data of stations. The value of holiday is True or False. By matching dates, we judge whether the dates in the train operation data and the delay data of stations are included in the above 8 dates.
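A minimal sketch of the holiday flag, assuming a datetime column named date (the column name is hypothetical):

```python
# Flag records that fall on one of the eight major holidays in the study period.
import pandas as pd

MAJOR_HOLIDAYS = pd.to_datetime([
    "2019-10-31", "2019-11-28", "2019-12-13", "2019-12-25",
    "2020-01-01", "2020-01-02", "2020-01-24", "2020-01-25",
])

def add_holiday_flag(df: pd.DataFrame) -> pd.DataFrame:
    df["holiday"] = df["date"].dt.normalize().isin(MAJOR_HOLIDAYS)
    return df
```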

Through the above data processing methods, we obtain the final high-speed railway network dataset.

We perform validation steps for the high-speed railway network dataset from train operation records and weather data. Please see Section Technical Validation for more details.


Blockchain mining and energy innovation at BSV Global Blockchain Convention – CoinGeek

In recent years, hot topics surrounding Bitcoin and proof-of-work mining have been their energy efficiency and the environmental impacts they generate.

On day three of the BSV Global Blockchain Convention, a panel moderated by CoinGeek's Patrick Thompson, including TAAL CEO Richard Baker, Alex de Vries from Digiconomist, Prof. Robert Lee from 518 Blockchain Community, and Kurt Wuckert Jr. from GorillaPool, discussed blockchain mining, the amount of energy it consumes, how to make blockchains more efficient, and more.


How much energy does blockchain mining consume globally?

Thompson opens the discussion by asking de Vries to tell us how much energy blockchain mining consumes globally. It makes sense to ask him since he's spent the last half a decade or more researching this both academically and independently.

"If you add all crypto assets that can be mined together, you're probably talking at least as much as all global data centers in the world combined," de Vries tells us.

This includes all of the data centers for Meta, Amazon, Google, and other corporate giants, as well as the legacy financial system. He tells us that this accounts for more than 1% of global electricity usage and that BTC and Ethereum are responsible for most of this usage.

Thompson asks whether this is problematic, and Richard Baker answers that it is, because policymakers look at the headlines, and amidst an energy crisis like the one we're in right now, they have a responsibility to decide whether industrial and household energy needs should take priority over blockchain requirements. He notes how we already see policy in the European Union and parts of the United States, which is considering a proof-of-work ban. However, he points out that not all proof-of-work blockchains are the same, and we need to make sure that policymakers understand the utility of blockchains like BSV.

Does mining provide real value, or is it negative for society?

Wuckert is up next, explaining the value created by mining blocks. He highlights the value created by the internet and notes that the same scrutiny over energy usage isn't applied to things like AWS or other cloud computing systems. He then explains that in large (several GB) blocks, we can fit an incredible number of transactions, but that it's difficult to judge the value of those transactions because they're qualitative in nature.

Thompson then asks Professor Lee if blockchain mining has a net positive or a net negative outcome. He responds that if certain conditions are met (using green energy, delivering utility, playing by the regulations, and paying taxes), then it can be a net positive, but otherwise, it will remain a net negative. De Vries answers the same question, saying that it depends. He notes that there's still a debate around whether proof-of-work is the only way to do things like peer-to-peer transactions. Even if proof-of-work is the only way, he says it becomes hard to justify eliminating the need for a trusted third party if the latter is more energy efficient.

Wuckert carries on from this question to explain the differences between proof-of-work and proof-of-stake. He notes that in PoS systems, whoever holds the largest number of coins is king and that, over time, their power over the network only continues to grow. By contrast, in proof-of-work, inefficient and malicious actors are pushed out, and the nodes that produce the most value flourish.
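To make the energy point concrete, here is a toy sketch of the proof-of-work idea (not the code of any production miner): a node repeatedly hashes a candidate block until the digest meets a difficulty target, and the expected work, and therefore the energy, grows with that target.

```python
# Toy proof-of-work: find a nonce so that SHA-256(header + nonce) starts
# with a given number of zero hex digits. Real miners do this at enormous
# scale, which is where the energy consumption comes from.
import hashlib
from itertools import count

def mine(header: str, difficulty: int = 5) -> tuple[int, str]:
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

nonce, digest = mine("block header data")
print(nonce, digest)
```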

Miners on the move and how to improve efficiency

Thompson notes that with several high-profile crackdowns on mining in recent times, many miners are on the move. He asks Baker to tell him where most mining is occurring today, and he explains that it's shifting towards North America. "We've seen $4.5 billion worth of hardware shipped into North America to power BTC this year," Baker tells us. However, he reminds us that not all states are miner-friendly, with New York being a good example of a state that's hostile towards blockchain miners.

Professor Lee elaborates on how miners can become more efficient to not draw the same ire from governments. He counts finding clean, renewable energy sources, using modern, efficient mining machines, and improving the efficiency of mining itself (e.g., by reducing cost per transaction) as ways to improve things.

Baker seconds Lee's points, saying that miners will be subject to ESG regulations. He points out that the BSV network is becoming super-efficient with growing block sizes.

Wuckert draws attention to miners using excess energy such as gas flares in oil and gas fields as one way to use energy that would otherwise have been wasted. He also says that many miners use the heat from mining for other useful things, such as heating greenhouses.

Staying on the subject of renewable energy for mining, de Vries points out that the use of renewables has actually gone backward, at least on the BTC network. After the mass exodus of miners from China, many relocated to places like Kazakhstan, where energy is still mostly coal-powered. He says he doesn't know if that's also true for BSV as he has never examined it.

Is a ban on proof-of-work an effective solution?

Thompson then poses the question of whether a ban on proof-of-work would solve the problems many politicians deem it to have. De Vries answers that there is no perfect solution and that miners will simply move to other jurisdictions. He reiterates that when China banned it, the carbon footprint went up, the opposite of the intended consequence.

Wuckert follows this by pointing out that when one jurisdiction bans something, it creates an opportunity elsewhere. He points out that the ban in China created opportunities in North America, and if New York bans it, it will create opportunities in Texas. This regulatory arbitrage will always be a factor, so bans are largely ineffective in the grand scheme of things.



The Best Oracle Analytics Cloud Tutorials on YouTube to Watch Right Now – Solutions Review

This list of the best Oracle Analytics Cloud tutorials on YouTube will introduce you to one of the most popular business intelligence platforms.

Oracle offers a broad range of BI and analytics tools that can be deployed on-prem or in the Oracle Cloud. The company provides traditional BI capabilities inside its Business Intelligence 12c solution. Oracle Data Visualization provides more advanced features and allows users to automatically visualize data as drag-and-drop attributes, charts, and graphs. The tool also enables users to save snapshots of an analytical moment-in-time via story points.

Learning Oracle Analytics Cloud can be a complicated process, and it's not easy to know where to start. As a result, our editors have compiled this list of the best Oracle Analytics Cloud tutorials on YouTube to help you learn about the platform and hone your skills before you move on to mastering it. All of the videos here are free to access and feature guidance from some of the top minds and biggest brands in the online learning community. All of the best Oracle Analytics Cloud tutorials listed tout a minimum of 5,000 views.

Note: Don't forget to subscribe to Solutions Review on YouTube!

Author: Oracle Analytics

Description: See how Oracle Analytics for Applications combines autonomous data warehouse technology with cloud analytics to deliver personalized application analytics, domain expertise, benchmarks, and machine-learning-powered predictive insights across every line of business. This resource has been viewed more than 11,000 times.

Author: apps2fusion

Description: apps2fusion's Oracle Analytics Cloud overview is worth the watch, totaling 23 minutes. What's more, apps2fusion offers accompanying training and resources on its site.

Author: Oracle Learning

Description: Originally created with Oracle Analytics V5.4 in mind, this resource shows you how to create a new project: sort the data, create a marquee selection, keep data elements within a marquee selection, swap data elements, add new elements, change a visualization type, and save the project.

Author: Oracle Learning

Description: Originally created with Oracle Analytics V5.4 in mind, this resource shows you how to create a new dashboard. We recommend this, along with all of Oracles other short tutorials to gain a complete overview.

Tim is Solutions Review's Editorial Director and leads coverage on big data, business intelligence, and data analytics. A 2017 and 2018 Most Influential Business Journalist and 2021 "Who's Who" in data management and data integration, Tim is a recognized influencer and thought leader in enterprise business software. Reach him via tking at solutionsreview dot com.


What Is AI? Understanding The Real-World Impact Of Artificial Intelligence – Forbes

Artificial intelligence is today's most discussed and debated technology, generating widespread adulation and anxiety, and significant government and business interest and investments. But six years after DeepMind's AlphaGo defeated a Go champion, countless research papers showing AI's superior performance over humans in a variety of tasks, and numerous surveys reporting rapid adoption, what is the actual business impact of AI?

Human intelligence communicating with the artificial kind. (Photo by Jonas Gratzer/LightRocket via Getty Images)

"2021 was the year that AI went from an emerging technology to a mature technology... that has real-world impact, both positive and negative," declared the 2022 AI Index Report. The 5th installment of the index measures the growing impact of AI in a number of ways, including private investment in AI, the number of AI patents filed, and the number of bills related to AI that were passed into law in legislatures of 25 countries around the world.

There is nothing in the report, however, about real-world impact as I would define it: measurably successful, long-lasting and significant deployments of AI. There is also no definition of AI in the report.

Going back to the first installment of the AI Index report, published in 2017, still does not yield a definition of what the report is all about. But the goal of the report is stated upfront: the field of AI is still evolving rapidly and even experts have a hard time understanding and tracking progress across the field. Without the relevant data for reasoning about the state of AI technology, we are essentially flying blind in our conversations and decision-making related to AI.

Flying blind is a good description, in my opinion, of gathering data about something you don't define.

The 2017 report was created and launched as a project of the One Hundred Year Study on AI at Stanford University (AI100), released in 2016. That study's first section did ask the question "What is artificial intelligence?" only to provide the traditional circular definition that AI is what makes machines intelligent, and that intelligence is the quality that enables an entity to function appropriately and with foresight in its environment.

So the very first computers (popularly called Giant Brains) were intelligent because they could calculate, even faster than humans? The One Hundred Year Study answers: "Although our broad interpretation places the calculator within the intelligence spectrum, the frontier of AI has moved far ahead and functions of the calculator are only one among the millions that today's smartphones can perform." In other words, anything a computer did in the past or does today is AI.

The study also offers an operational definition: AI can also be defined by what AI researchers do. Which is probably the reason this year's AI Index measures the real-world impact and progress of AI, among other indicators, by the number of citations and AI papers (defined as AI by the papers' authors and indexed with the keyword AI by the publications).

Moving beyond circular definitions, however, the study provides us with a clear and concise description of what prompted the sudden frenzy and fear around a term that was coined back in 1955: "Several factors have fueled the AI revolution. Foremost among them is the maturing of machine learning, supported in part by cloud computing resources and wide-spread, web-based data gathering. Machine learning has been propelled dramatically forward by deep learning, a form of adaptive artificial neural networks trained using a method called backpropagation."

Indeed, machine learning (a term coined in 1959), or teaching a computer to classify data (spam or not spam) and/or make a prediction (if you liked book X, you would love book Y), is what today's AI is all about. Specifically, since its image classification breakthrough in 2012, its most recent variety, deep learning, which involves classifying very large amounts of data with numerous characteristics.
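As a minimal illustration of learning from data in this sense, here is a tiny, entirely hypothetical spam/not-spam example using scikit-learn (not a system discussed in the report):

```python
# Tiny illustration of supervised learning: classify short messages as spam or not.
# The messages and labels are made up purely for demonstration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "meeting moved to 3pm",
            "claim your free reward", "lunch tomorrow?"]
labels = ["spam", "not spam", "spam", "not spam"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)                   # learn from labeled examples
print(model.predict(["free prize waiting"]))  # -> ['spam']
```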

AI is learning from data. The AI of the 1955 variety, which generated a number of boom-and-bust cycles, was based on the assumption that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." That was the vision and, by and large, so far it hasn't materialized in a meaningful and sustained way demonstrating significant real-world impact.

One serious problem with that vision was that it predicted the arrival in the not-so-distant future of a machine with human intelligence capabilities (or even surpassing humans), a prediction reiterated periodically by very intelligent humans, from Turing to Minsky to Hawking. This desire to play God, associated with the old-fashioned AI, has confounded and confused the discussion (and business and government actions) of present-day AI. This is what happens when you don't define what you are talking about (or define AI as what AI researchers do).

The combination of new methods of data analysis (backpropagation), the use of specialized hardware (GPUs) best suited for the type of calculations performed, and, most important, the availability of lots of data (already tagged and classified data used for teaching the computer the correct classification), is what led to today's AI revolution.

Call it the triumph of statistical analysis. This revolution is actually a 60-year evolution of the use of increasingly sophisticated statistical analysis to assist in a wide variety of business (or medical or governmental, etc.) decisions, actions, and transactions. It has been called data mining, predictive analytics, and most recently, data science.

Last year, a survey of 30,000 American manufacturing establishments found that productivity is significantly higher among plants that use predictive analytics. (Incidentally, Erik Brynjolfsson, the lead author on that study, has also been a steering committee member of the AI Index Report since its inception.) It seems that it's possible to find a measurable real-world impact of AI, as long as you define it correctly.

AI is learning from data. And successful, measurable, business use of learning from data is what I would call Practical AI.
