
Want to return your last online purchase? Amazon says don’t bother – The Next Web

When you receive a faulty or unwanted item, Amazon usually asks you to initiate a return for a refund or replacement. However, if the item is inexpensive, it might cost the company a lot to ship it back to the warehouse.

So, companies like Amazon and Walmart are asking customers to keep these items to save some money. According to a report from the Wall Street Journal, these online retail giants are using AI to determine if it economically makes sense to initiate a return of an unwanted item.

The report notes that the COVID-19 pandemic has accelerated this process as more people are ordering stuff online, and companies could increase efficiency if there's one less logistical hassle to deal with.

Target and Walmart both confirmed to the WSJ that they ask customers to keep or donate the items they don't plan on reselling. The publication also notes that according to several shipping companies, returns in the past year have jumped significantly.

CBRE Group, an investment firm, said that returns in the past holiday season could skyrocket to $70.5 billion, 73% more than the average of the last five years.

Some delivery companies are working with ecommerce companies to provide ways to pick items up from customers or let them make returns at a nearby location. But as it turns out, it's cheaper to just let you keep the stuff.


See the article here:

Want to return your last online purchase? Amazon says don't bother - The Next Web

Read More..

How Data in India Went From Being a Tool of Planning to a Tool of Control – The Wire Science

Representative image of a person authenticating their biometric data. Photo: Reuters.

The following is an excerpt from Lives of Data: Essays on Computational Cultures from India, edited by Sandeep Mertia, foreword by Ravi Sundaram, Institute of Network Cultures (Amsterdam, 2020).

This book is published under a CC BY-NC-SA 4.0 license and is available here.

It is not difficult to see what is wrong with official statistics in India. There is a gap between theory and practice. There is a gap between the means and the end in the absence of any clearly perceived purpose.

~ P. C. Mahalanobis, Statistics as Key Technology, 1965

Data is its own means. It is an unlimited non-rivalrous resource. Yet, it isn't shared freely. What began as a differentiator is now the model itself.

~ Nandan Nilekani, Why India needs to be a Data Democracy, 2017

Data shadows our situation. Many believe it can determine our situation. There were enthusiastic claims that Big Data would lead to a fourth industrial revolution and the end of Theory, and that it will transform how we live, work, and think. Arguably, much of the early 2010s hype around the big data revolution has already been replugged into popular narratives of artificial intelligence (AI). The media infrastructures that enliven digital data and the fast-moving claims of data revolution are now evidently more globalised and capitalised than ever before. If we look a little under the hood, techniques such as data mining have moved from the margins of techno-scientific practice to normative centres of global computing in less than two decades. How did data become so powerful, pervasive, and relatable in the first place? To understand the global momentum of the data revolution, it is crucial to inquire into the many lineages, affinities, and relations of data in context-sensitive ways.

Data Revolution(s) in Context

The contrast between the two epigraphs above is a good place to begin tracking lives of data. The first epigraph is from a lecture in 1965 at the 125th Annual Meeting of the American Statistical Association by P. C. Mahalanobis, founder of the Indian Statistical Institute (ISI) and a member of the Planning Commission, a powerful body at that time. In this lecture, he emphasised the need to establish a purposive view of statistics as a fully developed technology of a multi-discipline character. This was especially so in the underdeveloped countries where the principle of authority of the government reigned supreme over independent statistical analysis and interpretation. Mahalanobis made these observations at a time when the ISI and India's official statistics and economic planning system were receiving global recognition for pioneering work in research, training, sample-survey methods, and economic planning. He clearly placed statistical knowledge production in the service of postcolonial nation-building. The desire to perceive a clearly defined purpose when the ISI was already at the cutting edge of large-scale data collection and processing stands in puzzling contrast to contemporary modes of data-driven governance which claim data is its own means.

The second epigraph is from an opinion piece by Nandan Nilekani, co-founder of Infosys and founding Chairman of Unique Identification Authority of India (UIDAI), the government body responsible for the world's largest biometric database, Aadhaar. In this article he argues for the value of big data and artificial intelligence for disrupting existing patterns of information management, and cautions against data colonisation by state and global platforms.

It is important to note that what we now know as Aadhaar actually began in 1999 as an identity card project for citizens living in border states. The Rangarajan Commission, set up in January 2000 to look into the growing concern regarding the quality of data in the entire statistical system, recommended the creation of a centralised database of citizens (population register) in which every citizen would have a unique identification number. Within a few years of the UIDAI being set up in 2009, Aadhaar became a primary key linking databases of bank accounts, mobile phones, income tax returns, payment apps, email IDs, and so on, even if such a linking is not mandated by the law.

Aadhaar has afforded development of application programming interfaces (APIs), and web and mobile applications with payment interfaces demanding Aadhaar verification for government and private services across domains. Perhaps nobody in 2009 could have imagined connecting biometric data to mobile phone SIM cards. Anumeha Yadav (Chapter 7) draws on her detailed field reports to show how the project grew from select pilot implementation in 2011 to a national legal and policy imperative by 2017. She notes a growing public alertness to the importance of enrolling with Aadhaar to ensure the ratification of rights, irrespective of the unclear legal status and the widespread technological glitches in the everyday functioning of the project.

The story of Aadhaar raises questions about what counts as data, who can design its purposes, and how its means and ends are discovered. It is a story that is at once expansionist and contingent: in India, the evolution of Aadhaar indicates that we need to reflect on computational culture without prefiguring the object of computation and its potential relationship to taxonomies of social control.

To understand the shift that has taken place between the data in the mid-20th-century statistical regime of economic planning and big data aggregation and prediction in the contemporary, we need to re-examine the history of computing in India, which has been largely tethered to the IT revolution. We examine different techniques and affordances of computation in different media ecologies consisting of human computers and mass-media such as telecom in the decades before the emergence of the internet.

In Chapter 1, I explore the role of the first computers of India, both human and electronic, from the 1930s to 1960s in generating official statistics. In Chapter 2, Karl Mendonca analyses the role of computerization in the 1980s at a major advertising company involved in the cinema business, and how the company later repurposed its cinema distribution network into a courier company. In different ways, both chapters challenge the notion of a clear and stable rationale for the evolution of computers and big data.

It was not until the early 2000s that database practitioners began to seriously look at data mining as a mode of knowledge production. New concepts of scale and computational processing power emerged and developed through trade-offs and reconfigurations of statistical accuracy, localised data storage and retrievability, hardware and software load balancing, and electricity consumption.

Of particular importance was the shift from relational (structured design) to non-relational (distributed design) database management systems. Here, we must not forget the co-production of affordances, users, and publics. After all, a computer database is only one specific instance of a wider set of relationalities made durable by the thoroughly material and well-constructed craft of software engineering, even if it is widely imagined to be abstract and mystical.

In the Indian context, while the IT industry has become symbolic of a new middle-class imaginary of technology and social mobility, the epistemic cultures of software engineering and their relations with global developments are yet to be adequately unpacked. We do not know how India's political and infrastructural conditions affect Aadhaar's database design or the development of high energy-consuming data centers for data sovereignty, to name but two examples.

In a post-colony like India, any critical engagement with data-driven knowledge production has to consider the persistent role of colonial biopolitics. It is well established that statistics, formerly termed political arithmetic, have played a key role in the production of people, identity, and nation-states. From the construction of enlightenment ideas such as the individual, national populations in Europe, and the citizen in the USA, the intended and unintended consequences of counting and categorising people run far and wide.

European colonies became sites for exotic and imperious enumerative and classificatory systems framed by orientalist pedagogies that displaced and serialised existing social orders. From the inventions of fingerprinting and the enumeration of complex traditions of faith and social difference into the fixities of religious identity and objectification of caste, such a biopolitics sought to make populations knowable and governable.

Post-independence India saw an expansion of bureaucracy, official statistics, and planning. Subsequently, government and transnational businesses used data modelling of the economy and populations to understand citizenship entitlements and consumer profiles. The intersections of state and market interests after economic liberalisation in 1991 transformed the national political economy as well as the everyday cultural conditions of governance.

In particular, the entry of private digital technology vendors and consultants in state and international development projects afforded new means and incentives for collecting and analysing data. Supporters of the Aadhaar project often claim that the state is a much more benign collector of data than companies such as Google and Facebook. Putting questions of veracity aside, the implications of this distinction are suggestive. The purported commensurability between data imaginaries and practices of India's welfare state and those of big technology companies widens the scope of inquiry into the politics of data-driven governance and bureaucracy.

From state-owned biometrics to state-promoted transnational mobile apps, the contemporary (surveillance-friendly) road between the ideology of the state and that of popular digital media is punctuated by diverse and distributed data-driven pathways.

At one level, the shift from colonial fingerprinting to contemporary biometric technologies shows some continuity in terms of tactics of governance and subjectification of bodies. If we look closely though, the machinic-readability of fingerprints opens new analytical challenges for theorising governmentality. The contemporary modes of data-driven subjectification are deeply entangled with proliferation of digital technologies of identification in governance, finance, media, and consumer products across developmental and business models. How can we map this expansion and proliferation in sociotechnically specific ways?

From navigating the nudge marketing of discount codes on mobile payment apps to facing new determinations of citizenship and identity through myriad paper-based and digital documents, among other things, the emergent mutations of power, subjectivity, and data demand a closer look into the design and material form of media. This is particularly challenging in conditions of fragmented digital infrastructures, where diverse intermedial forms emerge and coalesce in everyday practices for bypassing the lack of end-to-end connectivity and formal access.

Sandeep Mertia is a PhD Candidate at the Department of Media, Culture, and Communication, and Urban Doctoral Fellow at New York University. He is an ICT engineer by training, and former Research Associate at The Sarai Programme, Centre for the Study of Developing Societies.

Lives of Data emerged from research projects and workshops at the Sarai programme, Centre for the Study of Developing Societies. It seeks to better understand the status of data objects, relationalities, and difference in computational cultures. A critical focus on India necessitates pluralistic vantage points for examining the contemporary global discourse of data revolution in relation to the enduring legacies of colonialism and 20th-century modernisation programs. From state-supported technological boosterism of its digital superpower status to everyday lives of over a billion people in one of the most diverse and unequal societies in the world, India's sociotechnical conditions assemble deeply contrasting lives of data.

This collection of essays features a diverse group of interdisciplinary scholars and practitioners, engaging the emergence, limits, potentialities, politics, practices, and consequences of data-driven knowledge production and circulation. Encompassing history, anthropology, science and technology studies (STS), media studies, civic technology, data science, digital humanities, and journalism, the essays open up possibilities for a truly situated global and sociotechnically specific understanding of data, computing, and society.

View post:

How Data in India Went From Being a Tool of Planning to a Tool of Control - The Wire Science

Read More..

Database trends: The rise of the time-series database – VentureBeat

The problem: Your mobile app just went viral, and you've got a boatload of new users flooding your servers with a bazillion packets of data. How can you store this firehose of sensor data? Is there a way to deliver some value with statistical analysis? Can you do this all on a budget, with a well-tuned database that won't drive the price of supporting the widget through the roof?

The time-series database (TSDB) is designed to handle these endless streams, and it's one of the most notable current trends in database technology. It gives developers a tool for tracking the bits flowing from highly interactive websites and devices connected to the internet. It adds strong algorithms for fast queries for statistical analysis, which makes it popular for tackling problems like online ad selection and smart device support.

The TSDB has grown in popularity in recent years, and last year it was the fastest-growing type of database in the enterprise, largely because of the growing number of use cases for it. After all, time-series data is a sequence of data points collected over time, giving you the ability to track changes over that period, and that's what you need to do if you're running sophisticated transactions like advertising, ecommerce, supply chain management, and more.

What are some other major use cases for a TSDB?

First, datasets are large and getting larger. Log files are measured in petabytes now, and they're growing. Devices from the so-called internet of things (IoT) are proliferating, and they're often designed to rely on a central service for analysis and presentation. Sense.com, for instance, collects information on electrical consumption in houses millions of times per second. When these bits are reported, Sense.com's central database must store enough data to be useful, but not so much that it overwhelms the storage.

Time-series datasets often have fewer of the cross-table relationships that require transaction-based locking to avoid inconsistencies. Most of the data packets contain a timestamp, several sensor readings, and not much more.

This allows special indices to speed queries like counting the number of events in a day, week, or other time period. Good time-series indices can offer quick answers to statistical questions about ranges of data.
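
To make the idea concrete, here is a minimal Python sketch of time-bucketed aggregation, the kind of range query a time-series index is built to answer quickly. The sample readings are invented, and real databases do this with on-disk indices rather than an in-memory dictionary.

```python
from collections import defaultdict
from datetime import datetime, timezone
from statistics import mean

# Hypothetical readings: (unix_timestamp, value) pairs from one sensor.
readings = [
    (1610000000, 21.4), (1610003600, 21.9), (1610090000, 22.1),
    (1610093600, 23.0), (1610180000, 22.7),
]

# Bucket readings by calendar day so per-day counts and averages
# can be answered without rescanning every raw point.
buckets = defaultdict(list)
for ts, value in readings:
    day = datetime.fromtimestamp(ts, tz=timezone.utc).date()
    buckets[day].append(value)

for day in sorted(buckets):
    values = buckets[day]
    print(day, "count:", len(values), "mean:", round(mean(values), 2))
```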

The databases can also reduce maintenance overhead because many of the chores are regular and easy to automate. They can automatically dispose of old data while delivering only fresh statistics. While standard databases are designed to store data forever, time-series databases can be configured to give data elements a specific time to live. Others will use a round-robin algorithm to store a fixed set.
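
A rough Python sketch of the two retention styles just described: a fixed-size round-robin buffer and a time-to-live pruning pass. The capacity and TTL values are arbitrary placeholders, not defaults from any particular product.

```python
import time
from collections import deque

# Round-robin retention: a fixed-capacity buffer silently drops the
# oldest point once the limit is reached.
ring = deque(maxlen=1000)

# Time-to-live retention: points older than TTL_SECONDS get pruned.
TTL_SECONDS = 24 * 3600
ttl_store = []  # list of (timestamp, value) pairs

def append(ts, value):
    ring.append((ts, value))
    ttl_store.append((ts, value))

def prune(now=None):
    """Drop points whose age exceeds the configured time to live."""
    now = now or time.time()
    ttl_store[:] = [(ts, v) for ts, v in ttl_store if now - ts <= TTL_SECONDS]

append(time.time() - 2 * 24 * 3600, 10.0)  # two days old, will be pruned
append(time.time(), 11.5)
prune()
print(len(ring), len(ttl_store))  # the ring keeps both, the TTL store keeps one
```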

As time goes by, the databases deploy specialized compression functions that store time-series data in less space. If sensor readings don't change from millisecond to millisecond, there's no reason to store another copy of the same value. Timescale.com, for instance, boasts of 94%-97% savings in storage thanks to compression algorithms tuned to the regular data patterns.
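
The idea can be illustrated with a simple run-length scheme that stores a value once along with a count of how many consecutive readings repeated it. This is a generic sketch of the principle, not Timescale's actual compression algorithm.

```python
def compress(samples):
    """Run-length encode repeated sensor values: store (value, count)
    instead of one row per identical reading."""
    runs = []
    for value in samples:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return runs

def decompress(runs):
    return [value for value, count in runs for _ in range(count)]

samples = [20.0] * 6 + [20.5] * 3 + [20.0] * 4
runs = compress(samples)
print(runs)                      # [[20.0, 6], [20.5, 3], [20.0, 4]]
assert decompress(runs) == samples
```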

Tracking how people, machines, and organizations behave over time is the key to customization. Time-series databases that optimize the collection and analysis of time-series data open up the opportunity to provide business models that adjust and avoid one-size-fits-all standardization. Algorithms that place advertising, for instance, can look at recent behavior. Intelligent devices like thermostats can search through events and understand what people want at different times of the day.

All major databases have long had fields that store dates and times. All of the traditional queries for searching or tabulating the data still work with these entries. Oracle databases, for example, have been popular on Wall Street for storing regular price quotes. They aren't optimized like the new databases, but that doesn't mean that they can't answer the questions with a bit more computational power. Sometimes it's cheaper to buy bigger machines than switch to a new database.

Some applications may collect a variety of data values, and some may be best suited to the stability of a traditional database. Banking applications, for instance, are filled with ledger transactions that are just time-series tables of the total deposits. Still, bank developers can be some of the most conservative, and they may prefer a legacy database with a long history over a new tool with better efficiencies.

Sometimes the traditional companies are rolling out newer models that compete. Oracle, for instance, is also tuning its NoSQL database to search and analyze the time-series data streams from sensors and other real-time sources. The API will maintain a running collection of fresh data points and enforce time-to-live control over the data to avoid overloading the storage.

The newer data analysis engines often include tools specifically built for time-series data. For example, Microsoft's Data Mining tool for its SQL Server has a collection of functions that can look at historical data and predict future trends.

The cloud companies are also adding data storage services for this market. AWS, for example, launched its Timestream service, a tool optimized for IoT data. It will also integrate with the rest of the AWS stack through standard pathways like the Lambda functions, as well as customized ones for machine learning options like SageMaker.

New companies see an opportunity through focusing on adding the right amount of indexing and post-processing to make queries fast and effective.

InfluxDB began as an open source project and is now available as either a standalone installation or an elastic serverless option from the InfluxDB Cloud. The company's Flux query language simplifies tasks like computing the moving averages of the data stream. The language is functional and designed to be easily composable, so queries can be built up from other queries.
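
Flux syntax is not reproduced here; the plain-Python sketch below only illustrates the composable style the paragraph describes, building a moving average out of small reusable functions. The sample prices are invented.

```python
from collections import deque

def window(stream, n):
    """Yield sliding windows of the last n values."""
    buf = deque(maxlen=n)
    for value in stream:
        buf.append(value)
        if len(buf) == n:
            yield list(buf)

def moving_average(stream, n):
    """Compose the windowing step with a mean over each window."""
    for values in window(stream, n):
        yield sum(values) / n

prices = [10, 11, 13, 12, 15, 14, 16]
print(list(moving_average(prices, 3)))  # approx [11.33, 12.0, 13.33, 13.67, 15.0]
```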

Timescale DB is a separate engine that is fully integrated with PostgreSQL for tasks that might need both traditional relational tables and time-series data. The company's benchmarks boast of speeding up data ingestion by a factor of 20. Queries for searching the data or identifying significant values like maxima can be thousands of times faster.

Prometheus stores all data with a timestamp automatically and provides a set of standard queries for analyzing changes in the data. Its PromQL bears some resemblance to the emerging data format for queries, GraphQL. This makes it simple for developers to set up alerts that could be triggered by data anomalies.
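
The following is not PromQL but a plain-Python sketch of the kind of rule such alerts encode: fire when the latest reading drifts several standard deviations away from its recent mean. The threshold and sample values are arbitrary.

```python
from statistics import mean, pstdev

def should_alert(history, latest, z_threshold=3.0):
    """Fire when the latest reading deviates from the recent mean by
    more than z_threshold standard deviations."""
    if len(history) < 10:
        return False  # not enough history to judge
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

recent = [200, 202, 198, 201, 199, 203, 200, 197, 202, 201]
print(should_alert(recent, 205))  # False: within normal variation
print(should_alert(recent, 260))  # True: likely anomaly
```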

Redis created a special module for ingesting the rapid data flows into the database. The indexing routines build a set of average statistics about the data's evolution. To save memory, it can also downsample or aggregate the elements.

Kdb+, a database that's the foundation of the Kx platform, maintains a connection with relational databases that makes it simpler to work with some of the relational schema that dominate some applications. The streaming analytics built by the database offer both traditional statistics and some machine learning algorithms.

Open source projects and startups have many of the same goals as other tech projects. They all want to find ways to handle bigger data streams with more complicated analytics that are run in more efficient silos: bigger, faster, smarter, and cheaper.

Beyond that, groups are starting to think about the long-term custodial responsibilities that the endless streams might require. The Whisper open source database, for instance, is designed to gracefully turn high-resolution data that might be compiled from a rapid stream into a lower-resolution, historical summary that can be stored and studied more efficiently over time. The goal is to save space while still providing useful summaries. The database is, in essence, deliberately saving summaries and disposing of the information that was originally entrusted to it.
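
A minimal Python sketch of that downsampling idea, collapsing high-resolution points into one averaged point per interval. It is a generic illustration, not Whisper's actual storage format.

```python
from statistics import mean

def downsample(points, bucket_seconds):
    """Collapse (timestamp, value) points into one averaged point per
    bucket_seconds interval, trading resolution for storage."""
    buckets = {}
    for ts, value in points:
        key = ts - (ts % bucket_seconds)
        buckets.setdefault(key, []).append(value)
    return [(key, mean(vals)) for key, vals in sorted(buckets.items())]

# One reading per second for two minutes, summarized per minute.
high_res = [(t, 100 + (t % 7)) for t in range(0, 120)]
print(downsample(high_res, 60))
```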

The companies are debating the language used by developers to write queries. QuestDB is revisiting and extending SQL by adding features for grouping and analyzing data by time. It believes that SQL is a language that will live on, in part because so many DBAs know it.

Other companies are building specialized languages that are closer to functional programming languages. For example, InfluxDB's Flux language encourages developers to compose their solutions out of multiple smaller, reusable functions.

The companies will also be pushing to extend the presentation layer. Many of the databases are already loosely coupled with graphical dashboards like Grafana. These connections will grow deeper, and in many cases the tools will effectively merge with the time-series database. Matomo, for instance, is presented as a product for tracking visitors to websites.

In a sense, all databases are time-series databases because they maintain a log of the transactions that build up the table. The real question is which applications need to track how data changes over time. Many traditional databases were concerned only with the current state. They tracked, for instance, how many empty seats were left on the airplane. That's the most important detail for selling tickets.

But sometimes there are hidden opportunities in even these applications. For instance, tracking when the tickets are sold can help pricing strategies in the future because airlines can know whether demand is running ahead or behind historical norms. In this sense, even traditional applications that dont seem to need to track changes over time might be improved. The time-series databases might just be an opportunity.

Read the original post:

Database trends: The rise of the time-series database - VentureBeat

Read More..

Burglar alarm market Emerging Opportunities, Revenue Analysis and Growth Projections – KSU | The Sentinel Newspaper

This report provides a regional analysis of the market. It focuses on manufacturers, suppliers, segmentation, application, major players and more. Changing market dynamics such as market drivers, market restraints and market threats are provided in this research report. It also includes recent industry trends and developments with the competitive landscape. This report studies different attributes of business such as recent developments, technological platforms, tools and techniques that help in understanding the existing market. In addition, this report discusses the key drivers influencing the market growth, opportunities, challenges and the risks faced by key players and the market as a whole.

This research study assesses the dynamics and movement of the global market across various geographies. The study on the global market includes qualitative factors such as value chain analysis, drivers and restraints. Actual market sizes and forecasts have been provided for all the above-mentioned segments. The study includes the profiles of key players in the market with a significant global and regional presence.

The burglar alarm market is expected to reach USD 5.53 billion by 2026 from USD 3.58 billion in 2018 and is projected to grow at a CAGR of 5.6% in the forecast period of 2019 to 2026. The upcoming market report contains data for the historic year 2017; the base year of calculation is 2018 and the forecast period is 2019 to 2026.

Get Exclusive Sample Report + All Related Graphs & Charts Here @ https://www.databridgemarketresearch.com/request-a-sample?dbmr=global-burglar-alarm-market&AM

Major Industry Competitors: Burglar Alarm Market

The renowned players in the burglar alarm market are Inovonics Wireless Corporation, Siemens AG, AEON SYSTEMS, INC., ASSA ABLOY, Johnson Controls, Napco Security Technologies, Inc., Securitas, Collins Aerospace, Hangzhou Hikvision Digital Technology Co., Ltd., Honeywell International Inc., ADT, Banham Group and RISCO Group.

Revealing the Competitive scenario

In today's competitive world you need to think one step ahead of your competitors. Our research offers reviews of key players, major collaborations, and mergers & acquisitions, along with trending innovations and business policies, to present better insights that drive the business in the right direction.

Key Segmentation: Burglar Alarm Market

By System & Hardware (Central monitoring receiver, Remote terminal unit and Alarm sensors), End-users (Small and Medium Enterprises, Residential)

Regional Outlook

North America (US, Canada, Mexico)

South America (Brazil, Argentina, rest of South America)

Asia and Pacific region (Japan, China, India, New Zealand, Vietnam, Australia, Singapore, Malaysia, Thailand, Indonesia, Philippines, etc.)

Middle East and Africa (UAE, Saudi Arabia, Oman, etc.)

Europe (Germany, Italy, UK, France, Spain, Netherlands, Belgium, Switzerland, Russia, etc.)

Rapid Business Growth Factors

In addition, the market is growing at a fast pace, and the report shows us that there are a couple of key factors behind that. The most important factor that's helping the market grow faster than usual is the tough competition.

What are the major market growth drivers?

Increasing demand for detecting unwarranted intrusion of personnel and preventing theft

Growing demand in telecommunication networks and PSTN

Research strategies and tools used in the Burglar Alarm market:

This Burglar Alarm market research report helps readers understand the overall market scenario and the strategy needed to decide on this market project. It utilizes SWOT analysis, Porter's Five Forces analysis and PEST analysis.

Key Points of this Report:

The in-depth industry chain analysis includes value chain analysis, Porter's Five Forces model analysis and cost structure analysis

The report covers North America and country-wise market of Burglar Alarm

It describes the present situation, historical background and future forecast

Comprehensive data showing Burglar Alarm capacities, production, consumption, trade statistics, and prices in recent years are provided

The report provides a wealth of information on Burglar Alarm manufacturers

Burglar Alarm market forecast for the next five years, including market volumes and prices, is also provided

Raw Material Supply and Downstream Consumer Information is also included

Any other user requirements which are feasible for us

Key Developments in the Market:

In 2017, Google Nest launched a smart burglar alarm that provides facial recognition and other tools. It also offered a built-in video camera and speakers with new alarm system software.

In 2018, Ring launched a home security system with a smart burglar alarm that offers cameras and a professional monitoring service to monitor the home.

Some extract from Table of Contents

Overview of Global Burglar Alarm Market

Burglar Alarm Size (Sales Volume) Comparison by Type

Burglar Alarm Size (Consumption) and Market Share Comparison by Application

Burglar Alarm Size (Value) Comparison by Region

Burglar Alarm Sales, Revenue and Growth Rate

Burglar Alarm Competitive Situation and Trends

Strategic proposal for estimating availability of core business segments

Players/Suppliers, Sales Area

Analyze competitors, including all important parameters of Burglar Alarm

Global Burglar Alarm Manufacturing Cost Analysis

The most recent innovations and supply chain pattern mapping

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions like North America, Europe, MEA or Asia Pacific.

Table Of Contents Is Available Here @ https://www.databridgemarketresearch.com/toc?dbmr=global-burglar-alarm-market&AM

Why Is Data Triangulation Important In Qualitative Research?

This involves data mining, analysis of the impact of data variables on the market, and primary (industry expert) validation. Apart from this, other data models include Vendor Positioning Grid, Market Time Line Analysis, Market Overview and Guide, Company Positioning Grid, Company Market Share Analysis, Standards of Measurement, Top to Bottom Analysis and Vendor Share Analysis. Triangulation is one method used while reviewing, synthesizing and interpreting field data. Data triangulation has been advocated as a methodological technique not only to enhance the validity of the research findings but also to achieve completeness and confirmation of data using multiple methods.

About Data Bridge Market Research:

An absolute way to forecast what the future holds is to comprehend the trend today!

Data Bridge set forth itself as an unconventional and neoteric market research and consulting firm with an unparalleled level of resilience and integrated approaches. We are determined to unearth the best market opportunities and foster efficient information for your business to thrive in the market. Data Bridge endeavors to provide appropriate solutions to complex business challenges and initiates an effortless decision-making process.

Data Bridge is adept at creating satisfied clients who reckon upon our services and rely on our hard work with certitude. We are content with our glorious 99.9% client satisfaction rate.

Contact:

Data Bridge Market Research
US: +1 888 387 2818
UK: +44 208 089 1725
Hong Kong: +852 8192 7475
Email: Corporatesales@databridgemarketresearch.com

View post:

Burglar alarm market Emerging Opportunities, Revenue Analysis and Growth Projections - KSU | The Sentinel Newspaper

Read More..

Iran’s government blames smog and massive blackouts on bitcoin mining operations in its country – Boing Boing

Iran is experiencing wide-scale blackouts and unhealthy levels of smog. The government says bitcoin mining operations, which use a great amount of electricity, are to blame. The government has already shut down at least one licensed mining operation and is now going after rogue mining operations. People running the bitcoin mines in Iran say the government is simply blaming the miners for the problems and that the cause of the blackouts and smog lies elsewhere.

From The Washington Post:

"The miners have nothing to do with the blackouts," Ziya Sadr, a cryptocurrency researcher in Tehran, told The Washington Post. "Mining is a very small percentage of the overall electricity capacity in Iran."

He added, "It is a known fact that the mismanagement and the very terrible situation of the electricity grid in Iran and the outdated equipment of power plants in Iran can't support the grid."

The government itself has pointed to cheap electricity rates, enabled by government subsidies, as another major cause of the blackouts. A member of the board of the Iranian Blockchain Association told IRNA that the electricity used by cybercurrency miners in Iran was estimated to be about equal to the electricity lost by the network during distribution.

One quibble with The Washington Post piece. It states, "Decentralized cryptocurrencies rely on high-powered computers to verify that transactions are legitimate by solving complicated mathematical problems." That's incorrect on two counts. First, transactions are not verified by solving complicated mathematical problems. Miners compete for rewards by trying to be the first to guess a random number. There's nothing complicated about guessing it. And second, cryptocurrencies do not "rely on high-powered computers to verify that transactions are legitimate." The reason miners use high-powered computers is that the miners are competing with each other to make as many guesses as possible, and the more powerful their computers, the faster they can make guesses. But if 99.9% of all the computing power devoted to mining was shut down, the remaining miners would still be able to verify the transactions at the same rate they are verified now, because the bitcoin network automatically adjusts the difficulty of guessing the random number to match the computing power devoted to guessing it.
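
To make that argument concrete, here is a toy Python sketch of the guessing game and of schematic difficulty retargeting. It is a simplified illustration, not Bitcoin's actual consensus code; the targets, attempt counts, and timing numbers are invented.

```python
import hashlib
import secrets

def try_guesses(block_header, target, attempts):
    """Make `attempts` random guesses; return a winning nonce if any
    hash falls below the difficulty target, else None."""
    for _ in range(attempts):
        nonce = secrets.randbits(64)
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce
    return None

# A lower target means fewer hashes qualify, so more guesses are needed.
easy_target = 2**252      # roughly 1 in 16 hashes wins
hard_target = 2**244      # roughly 1 in 4096 hashes wins

print(try_guesses("block-data", easy_target, attempts=1000) is not None)

# Schematic retargeting: if blocks arrive too fast, shrink the target
# (making guessing harder); if too slow, grow it. Bitcoin retargets
# every 2016 blocks based on how long they took to mine.
def retarget(target, actual_seconds, desired_seconds):
    return int(target * actual_seconds / desired_seconds)

print(retarget(easy_target, actual_seconds=300, desired_seconds=600) < easy_target)
```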

Continue reading here:

Iran's government blames smog and massive blackouts on bitcoin mining operations in its country - Boing Boing

Read More..


Theia 456 is a stretched-out stream of sibling stars – EarthSky

Illustration depicting star streams in the Milky Way (not the Theia streams) via NASA/Northwestern University.

Astronomers have identified many more or less spherical clumps of stars born together and still traveling together through space. We call them open star clusters; many are well known as beautiful places in the sky to see through binoculars or a small telescope. On January 15, 2021, at a virtual session during this week's meeting of the American Astronomical Society, astronomers presented new research on a different sort of collection of sibling stars. Theia 456 isn't an open cluster. It's what's called a stellar stream, a group of stars stretched out linearly, in this case over some 500 light-years. The astronomers studying Theia 456 combined multiple datasets, including those captured by ESA's Gaia satellite, which is carefully tracking the positions (and hence movements) of over a billion stars over a five-year period. They found that, despite its stretched-out shape, all of Theia 456's 468 stars are indeed siblings, born at the same time and traveling in the same direction across the sky.

That means our understanding of how sibling stars can exist with one another, within the confines of our Milky Way galaxy, is evolving.


Jeff Andrews of Northwestern University is a member of the research team and the presenter of the new information about Theia 456 at the AAS meeting last week. He said in a statement:

Most stellar clusters are formed together. What's exciting about Theia 456 is that it's not a small clump of stars together. It's long and stretched out. There are relatively few streams that are nearby, young and so widely dispersed.

Here's an ordinary open star cluster, what we typically think of when speaking of sibling stars. This cluster is called the Pleiades. Its stars were born from a single cloud of gas; you can see here that they're still surrounded by a veil of nebulosity. Gazing at the sky with just your eye, you can easily glimpse the tiny, distinct dipper-like shape of the Pleiades. Star streams can't be seen in that same sense. They're found by mining data from spacecraft like Gaia. Astronomer Fred Espenak captured this photo in 2018. Read more about open star clusters like the Pleiades.

Researchers believe long strings of stars started out as tight open clusters. They believe these particular clusters were gradually ripped apart and stretched out into long streams by tidal forces within the Milky Way. Andrews said:

As we've started to become more advanced in our instrumentation, our technology and our ability to mine data, we've found that stars exist in more structures than clumps. They often form these streams across the sky. Although we've known about these for decades, we're starting to find hidden ones.

In other words, until recently, most known stellar streams didn't consist of young, sibling stars orbiting in the flat disk or plane of our Milky Way galaxy. A look at a short list of known stellar streams in the Milky Way on Wikipedia, for example, shows their origins as globular star clusters (old, symmetrical clusters found in our galaxy's halo) or dwarf galaxies orbiting outside the Milky Way's flat plane. These sorts of stellar streams were discovered by telescopes pointed above or below the Milky Way's plane.

Theia 456 does dwell within the Milky Way's flat plane or disk. These astronomers referred to it as hidden because it's easily lost within the galaxy's backdrop of 400 billion disk stars. Andrews commented:

We tend to focus our telescopes in other directions because it's easier to find things. Now we're starting to find these streams in the [disk of the] galaxy itself. It's like finding a needle in a haystack. Or, in this case, finding a ripple in an ocean.

Theia 456 isn't the only one of its kind, although it may be the only one so far that's been studied so extensively. In fact, these astronomers said:

The Milky Way houses 8,292 recently discovered stellar streams, all named Theia.

Here's a schematic of our Milky Way galaxy. Most known stellar streams lie outside the galaxy's flat disk. Theia 456 lies within the disk of the galaxy. Image via Cosmos.

These astronomers' statement explained:

Identifying and examining these structures is a data science challenge. Artificial intelligence algorithms combed huge datasets of stellar data in order to find these structures. Then Andrews developed algorithms to cross-reference those data with preexisting catalogs of documented stars' iron abundances.

Andrews and his team found that the 500 stars within Theia 456 had similar iron abundances, which means the stars likely formed together around 100 million years ago.

They also found that the stars in Theia 456 are moving together in the same direction. Andrews said:

If you know how the stars are moving, then you can backtrack to find where the stars came from. As we rolled the clock backwards, the stars became closer and closer together. So, we think all these stars were born together and have a common origin.
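
The backtracking logic can be illustrated with a toy Python sketch: subtract each star's velocity times the elapsed time and watch the spread of positions shrink. The positions, velocities and units below are invented for illustration and have nothing to do with the team's actual measurements.

```python
# Toy illustration: stars that share a birthplace drift apart over time,
# so rewinding their motion should bring them back together.
# Positions in parsecs, velocities in parsecs per million years (made up).
stars = [
    {"pos": (120.0, 15.0), "vel": (1.00, 0.10)},
    {"pos": (118.0, 14.0), "vel": (0.98, 0.09)},
    {"pos": (122.5, 16.5), "vel": (1.02, 0.11)},
]

def spread_after_rewind(stars, myr_ago):
    """Rewind each star by myr_ago million years and return the spread
    (maximum pairwise distance) of the rewound positions."""
    rewound = [
        (s["pos"][0] - s["vel"][0] * myr_ago, s["pos"][1] - s["vel"][1] * myr_ago)
        for s in stars
    ]
    return max(
        ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
        for i, (ax, ay) in enumerate(rewound)
        for bx, by in rewound[i + 1:]
    )

for t in (0, 50, 100):
    print(f"{t} Myr ago: spread = {spread_after_rewind(stars, t):.1f} pc")
```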

In addition to data from the Gaia satellite, these astronomers used data from NASA's Transiting Exoplanet Survey Satellite (TESS) and from the Zwicky Transient Facility. Andrews said that combining datasets and data mining is essential to understanding the universe around us:

You can only get so far with one dataset. When you combine datasets, you get a much richer sense of what's out there in the sky.

Bottom line: Researchers have discovered that Theia 456, a stellar stream, or linear collection of stars moving together through space, contains 468 stars born at the same time. This tells us that stars born together can move together in clumps (open star clusters) or in streams like Theia 456.

Source: Theia 456: A New Stellar Association in the Galactic Disk, presented as part of a session on the modern Milky Way at the 237th meeting of the American Astronomical Society.

Read more about Gaia's 2nd data release: 1.7 billion stars!

Read more about Gaia's 3rd data release: Gazing toward the galactic anticenter

Via Northwestern

View original post here:

Theia 456 is a stretched-out stream of sibling stars - EarthSky

Read More..

Ping An Uses Artificial Intelligence to Drive New ESG Investment Strategies – PRNewswire

HONG KONG and SHANGHAI, Jan. 14, 2021 /PRNewswire/ -- The Ping An Digital Economic Research Center (PADERC), a member of Ping An Insurance (Group) Company of China, Ltd. (HKEx:2318; SSE:601318), has created four new investment strategies for environmental, social and corporate governance (ESG) investing using Ping An's proprietary CN-ESG data for China A-shares, in light of surging demand in China for ESG ratings and data with wider coverage and a better fit for China's market.

Ping An ESG framework aligns with international standards and Chinese regulations

The investment strategies detailed in the report, "Applications of Ping An CN-ESG Data and Framework in Quantitative Investment Strategy", use the proprietary CN-ESG database and scoring framework developed by the Ping An Group. Ping An was the first asset owner in China to sign the United Nations Principles for Responsible Investment. The framework leverages Ping An's expertise in finance and technology and aligns with international standards as well as guidelines from Chinese regulators to incorporate material topics for Chinese companies.

With technologies such as web crawlers, data mining, machine learning, knowledge graphs, natural language processing (NLP) and satellite remote sensing, the CN-ESG system can verify ESG disclosure-based data as well as mine non-disclosure-based data to provide investors with richer multi-dimensional information.

PADERC's report provides an in-depth analysis on the data characteristics, effectiveness, and strategy back-testing results of the CN-ESG database and scoring framework, which covers more than 3,900 listed companies in the China A-share market with five years of historical data (2015-2019). The framework can provide quarterly results that are further adjusted based on news sentiment scores in real-time compared to annual or semi-annual updates from most ESG rating providers.

ESG factors independent of financial factors

PADERC found that Ping An's CN-ESG scores among A-share companies are close to a normal distribution. The factor correlation test results show that the scores perform notably as quality factors. The overall correlation between CN-ESG factors and traditional financial factors is generally low, showing high levels of independence for ESG factors, which indicates they can provide new data and viewpoints for investment decisions.

The results of the factor layered test show that Ping An CN-ESG factors have a relatively strong positive screening effect on the Chinese Securities Index (CSI) 300 and CSI 800 stock pools. The financial window dressing factors, constructed by evaluating the quality and authenticity of companies' financial data, yielded 11.61% long-short gains since 2015.

ESG investment strategies that balance excess returns with ESG objectives

Based on CN-ESG data, PADERC constructed four types of ESG investment strategies that use artificial intelligence (AI) to balance excess investment returns and ESG investment targets:

1) Ping An AI-ESG Selected 100 Strategy: This positive screening strategy selects the companies with the highest ESG scores (a generic sketch of this screening step appears after this list). Because it is based on the broader CSI 800 stock pool, it can better leverage the additional information in ESG scores. This strategy achieved an annualized excess return of 4.44%, and the annual weighted average ESG score quantile of the portfolio is 94.2% within the benchmark stock pool.

2) Ping An AI-ESG Enhancement Strategy: On the basis of ESG score-based positive screening, PADERC added ESG factors to its Ping An Digital Economic Research Center 500+ No.1 AI Stock Selection Strategy, producing notable excess returns. The AI stock selection strategy uses linear and non-linear algorithms to capture complex market structures and predict the excess return of individual stocks. The Ping An AI-ESG Enhancement Strategy has an annualized excess return of 16.34%, and the annual weighted average ESG score quantile of the portfolio is 78.7% within the benchmark stock pool.

3) CSI 300 ESG Style Index Series: The CSI 300 ESG Growth Index explores the growth value of the CSI 300 stocks while controlling tail risks. The CSI 300 ESG Low Volatility Index reinforces the stability features of ESG investment in both the short and long term. The ESG Growth Index achieved annualized excess returns of 5.67% and the Low Volatility Index 8.61% relative to the benchmark. The annual weighted average ESG score quantiles of the portfolios are 75.1% (Growth Index) and 73.1% (Low Volatility Index) within the benchmark stock pool.

Further testing of excess returns shows that the above active management strategies have almost all achieved excess returns in adverse market conditions, including bond crises, annual bear-market downturns, the Sino-US trade war, and COVID-19, verifying the effectiveness of ESG factors in challenging environments.

4) AI-ESG MAX Strategy: ESG enhancement of mainstream ETFs enables investors to gradually incorporate ESG concepts into their investing process without changing their traditional investing habits. Based on the CSI 300 and controlling for sector deviation, this strategy sets tracking-error limits of 1%, 3% and 5%. Under each tracking-error assumption, the strategy maximizes ESG scores while achieving annualized excess returns of 3.61%, 3.40% and 3.43% respectively against the benchmark. The back-testing results of the strategy over the past five years show good performance, and excess returns were stable. This type of ESG-based index enhancement strategy could help drive an increase in the scale of ESG investing.
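The "ESG MAX" construction in strategy 4 is, in effect, a constrained portfolio optimization: maximize the portfolio's ESG score subject to an ex-ante tracking-error bound against the benchmark. A minimal sketch of that formulation, using the generic convex-optimization library cvxpy (the inputs and the long-only, fully-invested constraints here are assumptions, not Ping An's model), is shown below:

```python
# Sketch of an "ESG MAX" style optimization: maximize portfolio ESG score
# subject to an ex-ante tracking-error bound against a benchmark.
# The inputs (covariance matrix, benchmark weights, ESG scores) and the
# long-only / fully-invested constraints are illustrative assumptions.
import numpy as np
import cvxpy as cp

def esg_max_portfolio(esg_scores: np.ndarray,
                      cov: np.ndarray,
                      bench_weights: np.ndarray,
                      te_limit: float = 0.03) -> np.ndarray:
    """esg_scores: (n,) ESG score per stock.
    cov: (n, n) symmetric PSD annualized covariance matrix of stock returns.
    bench_weights: (n,) benchmark weights (e.g. a CSI 300-like index).
    te_limit: annualized tracking-error bound (e.g. 0.03 = 3%)."""
    n = len(esg_scores)
    w = cp.Variable(n)
    active = w - bench_weights
    constraints = [
        cp.sum(w) == 1,                                  # fully invested
        w >= 0,                                          # long only
        cp.quad_form(active, cov) <= te_limit ** 2,      # ex-ante tracking error
    ]
    problem = cp.Problem(cp.Maximize(esg_scores @ w), constraints)
    problem.solve()
    return w.value
```

Tightening te_limit pulls the solution closer to the benchmark weights; the three tracking-error settings quoted above are, in essence, different points on that trade-off between ESG maximization and index tracking.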
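The screening sketch referenced in strategy 1 follows: a generic illustration of selecting the 100 highest-scoring names from a benchmark pool and computing the portfolio's weighted-average ESG score quantile, the metric quoted for strategies 1-3 (the column names and equal weighting are assumptions, not details from the report):

```python
# Generic sketch of an ESG positive-screening step: keep the 100 names with
# the highest ESG scores in the benchmark pool, equal-weight them, and report
# the portfolio's weighted-average ESG score quantile within the pool.
# Column names and equal weighting are assumptions for illustration only.
import pandas as pd

def esg_selected_100(pool: pd.DataFrame, top_n: int = 100) -> pd.DataFrame:
    """pool: one rebalancing date's universe, columns ['ticker', 'esg_score']."""
    selected = pool.nlargest(top_n, "esg_score").copy()
    selected["weight"] = 1.0 / len(selected)  # simple equal weighting
    return selected[["ticker", "esg_score", "weight"]]

def portfolio_esg_quantile(selected: pd.DataFrame, pool: pd.DataFrame) -> float:
    """Weight-averaged ESG-score percentile of the portfolio within the pool."""
    pool = pool.copy()
    pool["esg_pct"] = pool["esg_score"].rank(pct=True)
    merged = selected.merge(pool[["ticker", "esg_pct"]], on="ticker")
    return float((merged["weight"] * merged["esg_pct"]).sum())
```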

Building a richer ESG strategy portfolio to meet investors' diverse needs

Ping An's CN-ESG framework will expand to include fixed income ESG data and climate risk-related AI-driven factors. It will enable more diverse investment options, such as ESG fixed income indices and climate risk-focused indices, to meet investors' diverse needs. Ping An also developed a series of AI-ESG products focusing on corporate management, risk monitoring and analytics solutions for ESG and climate risk analysis, including portfolio sustainability footprint analysis, a portfolio adjustment tool, a sustainable funds screening tool, and climate risk asset pricing models to support ESG investment.

PADERC is a professional institution specializing in macroeconomics and policy research, using big data and artificial intelligence to provide insights on macroeconomic trends, including developments in ESG disclosures and ratings.

For the full report, click here.

About Ping An Group

Ping An Insurance (Group) Company of China, Ltd. ("Ping An") is a world-leading technology-powered retail financial services group. With over 210 million retail customers and 560 million Internet users, Ping An is one of the largest financial services companies in the world.

Ping An has two over-arching strategies, "pan financial assets" and "pan health care", which focus on the provision of financial and health care services through our integrated financial services platform and our five ecosystems of financial services, health care, auto services, real estate services and smart city services. Our "finance + technology" and "finance + ecosystem" strategies aim to provide customers and internet users with innovative and simple products and services using technology. As China's first joint stock insurance company, Ping An is committed to upholding the highest standards of corporate reporting and corporate governance. The Group is listed on the stock exchanges in Hong Kong and Shanghai.

In 2020, Ping An ranked 7th in the Forbes Global 2000 list and ranked 21st in the Fortune Global 500 list. Ping An also ranked 38th in the 2020 WPP Kantar Millward Brown BrandZTM Top 100 Most Valuable Global Brands list. For more information, please visit http://www.pingan.cn.

About Ping An Digital Economic Research Center

Ping An Digital Economic Research Center utilizes more than 50 TB of high-frequency data, more than 30 years of historical data and more than 1.5 billion data points to drive research on the "AI + Macro Forecast" and to provide insights and methods for precise analysis of macroeconomic trends.

SOURCE Ping An Insurance (Group) Company of China, Ltd.

http://www.pingan.cn

More:

Ping An Uses Artificial Intelligence to Drive New ESG Investment Strategies - PRNewswire

Read More..

Patient Journey Analytics: Improving patient outreach and clinical outcomes through evidence-based solutions powered by innovative technology and…

LONDON--(BUSINESS WIRE)--Quantzig, one of the world's leading analytics solutions providers, announces the addition of new services to its advanced portfolio of patient journey analytics solutions. Quantzig's real-world evidence analytics capabilities cover various aspects of the patient journey, including Patient Journey Mapping, Diagnostic Data Analytics, and solutions designed to enhance Patient Outreach programs.

Quantzig's patient journey analytics solutions aim to reduce the burden of normalizing, integrating, and analyzing data by drawing actionable insights that solve some of the unmet needs in healthcare. Contact an analytics expert for immediate, actionable solution insights.

With constant fluctuations in market dynamics, healthcare organizations are experiencing significant challenges. Most of these challenges revolve around managing patient engagement, enhancing patient experiences, and driving outcomes while reducing costs. At the same time, healthcare service providers face many challenges due to the influx of patient data, budget constraints, and limited resources. Due to these factors, streamlining workflows, increasing efficiency, and enhancing the quality of care have become extremely important for maintaining positive patient experiences. Our comprehensive expertise and advanced real-world evidence analytics solutions support businesses across the healthcare continuum, including pharma and life sciences, medical device manufacturers, healthcare payers and providers, and other R&D organizations within healthcare.

Our scalable solutions are powered by sophisticated machine learning and natural language processing techniques that reduce the time associated with translating unstructured, fragmented, and disparate data sets into insights. Learn more.

How Can Quantzig's Patient Journey Analytics Solutions Help You?

Analyzing patient journeys in today's highly regulated business world is challenging. Quantzig offers holistic patient journey analytics solutions that act as a dominant force to drive patient outcomes and satisfaction rates. Take a look at the complete portfolio of patient journey analytics solutions here: https://bit.ly/2LFkvDQ

A unique mix of ML and NLP capabilities and technology expertise is what sets us apart, making us the most preferred patient journey analytics service provider globally. Our solutions also help businesses drive impactful outcomes.

Drawing on our expertise in real-world evidence analytics, we focus on integrating best practices to drive continuous process improvements in healthcare. Schedule a FREE demo to get started.

Why Partner with Quantzig?

With a huge clientele, ranging from CEOs and BU heads to stakeholders of Fortune 500 companies, we have played an active part in improving business outcomes globally. Our expertise and domain knowledge are reflected in the number of projects we've worked on and in the results that have prompted businesses to engage with us on an ongoing basis, making us the most preferred analytics partner for leading businesses.

With more than 16 years of experience and a dedicated team of 550+ analysts and data science professionals, we have a proven track record of helping healthcare organizations across North America, Europe, EMEA, and APAC leverage analytics to drive better outcomes. Request a FREE proposal to gain detailed insights into our engagement policies.

About Quantzig

Quantzig is the world's foremost full-service advanced analytics and business intelligence solution provider, turning clients' complex, unstructured data into intelligent, actionable insights that enable them to solve complex business problems and inspire innovation, change, and growth.

Over the past 16 years, our insights have helped over 120 clients spanning across industries and sectors like Pharmaceutical and Life Sciences, Retail and CPG, Food and Beverage, and more. We have successfully delivered 1500 in-depth solutions in areas like Marketing Analytics, Customer Analytics, Supply Chain Analytics, and more. For more information on our engagement policies and pricing plans, visit: https://www.quantzig.com/request-for-proposal

Read the original here:

Patient Journey Analytics: Improving patient outreach and clinical outcomes through evidence-based solutions powered by innovative technology and...

Read More..

Deep Learning Market Demand to Shoot with the Increasing Uses of Security & Surveillance Solutions Increasing Demand Forecast 2023 – NeighborWebSJ

Market Overview

The global deep learning market is growing at a rapid pace. Market growth is attributed to the increasing adoption of cloud-based services and the large-scale generation of unstructured data. In addition, the increasing demand for deep learning technology for natural language processing and voice/speech recognition applications drives the growth of the market. Moreover, increasing applications of deep learning models for image/speech recognition, data mining, and language translation escalate market growth.

In its detailed analysis, Market Research Future (MRFR) asserts that the global deep learning market is poised to register a 30.87% CAGR throughout the forecast period (2017-2023). The growing development of humanoid robots, such as Sophia, and the rising use of augmented reality (AR) and virtual reality (VR) displays in the automotive and 3D gaming sectors boost market growth. The use of deep learning technologies in medical image analysis is also rising exponentially.

As a result, burgeoning healthcare sectors worldwide are cited as a major contributor to market growth. Additionally, continuing development of deep learning technology and growth in the number of chatbots, alongside increasing R&D investments by companies, push the market's growth. The encouraging adoption of digital voice assistants and the increasing number of start-ups focused on augmented and virtual reality technologies also influence the market's growth.

Global Deep Learning Market Segmentation

The report is segmented into four dynamics:

By Component: Hardware (processor, memory, network, others), Software (solution, platform, others), and Services (installation, training, support & maintenance, others).

By Application: Image Recognition, Data Mining, Signal Recognition, and others.

By End-User: Manufacturing, Security, Retail, Media & Entertainment, Automotive, BFSI, Healthcare, Agriculture, and others.

By Region: Americas, Europe, Asia Pacific, and the Rest of the World.

Global Deep Learning Market Regional Analysis

North America dominates the global deep learning market. The region's largest market share is attributed to the growing adoption of deep learning models for image recognition, voice recognition, data mining, signal recognition, and diagnostics. In addition, substantial investments by key players in the development of AI technology platforms positively impact regional market growth.

Increasing R&D investment and the accelerating digital transformation of businesses push market demand in the region. Moreover, the rising use of deep learning technologies for image recognition, data mining, and signal recognition supports market growth, as do the manufacturing and automotive sectors in the US, Canada, and Mexico.

The Asia Pacific region stands second in the global deep learning market. The market is driven by the growing development and penetration of deep learning technology in the region. Additionally, the proliferation of digitization and the rapid rise of the image recognition segment drive the regional market's growth. Rising foreign investment and the implementation of deep learning models for applications such as voice, image, and signal recognition also favor the regional market's growth.

The European deep learning market is growing rapidly. Factors such as the increasing uptake of deep learning technology by European governments for surveillance, fraud detection, data mining, and healthcare diagnostics boost regional market growth. Furthermore, the presence of several notable players and large-scale deployments across industries in the region act as major tailwinds for the market's growth.

Global Deep Learning Market Competitive Analysis

Highly competitive, the deep learning market appears fragmented due to the presence of several well-established players. Mergers & acquisitions, innovation, and brand reinforcement remain the prevailing strategies for leading players, as they support growth and expansion plans. These players deliver reliable, leading-edge solutions and services, investing substantially in R&D to develop adept technologies and products.

Major Players:

Players leading the deep learning market include Intel Corporation (USA), Amazon Inc. (USA), Samsung Electronics Co Ltd (South Korea), Sensory Inc. (USA), Micron Technology (USA), Xilinx Inc. (USA), Mellanox Technologies (USA), Google LLC (USA), Adapteva, Inc. (USA), NVIDIA Corporation (USA), Qualcomm Technologies Inc. (USA), Baidu Inc (China), Advanced Micro Devices Inc. (USA), IBM Corporation (USA), Facebook (USA), Microsoft Corporation (USA), and Tenstorrent (Canada), among others.

Industry/Innovation/Related News:

November 21, 2020 - Abacus.AI (US), a leading AI research company, announced that it had raised USD 22 MN in Series B funding. The company has raised over USD 40 MN since its inception.

November 20, 2020 - A team of researchers at Binghamton University, State University of New York, announced the use of deep-learning techniques to analyze the data and suggest improvements to public safety by re-allocating resources. The team states that its DeepER model could be tweaked for other large cities, or clusters of smaller cities with similar characteristics, that would provide enough data to make predictions.

November 03, 2020 - A team of deep learning enthusiasts from IIT Roorkee (the Indian Institute of Technology, Roorkee) won the top prize at the recently concluded tech4heritage hackathon. The team, Ancient AI, used deep learning techniques to digitally restore damaged murals of the Ajanta caves, drawing on a dataset of reference paintings to develop their AI model.

Company name: Market Research Future

About Market Research Future:

At Market Research Future (MRFR), we enable our customers to unravel the complexity of various industries through our Cooked Research Report (CRR), Half-Cooked Research Reports (HCRR), Raw Research Reports (3R), Continuous-Feed Research (CFR), and Market Research & Consulting Services.

The MRFR team's supreme objective is to provide optimum-quality market research and intelligence services to our clients. Our market research studies, organized by components, applications, logistics, and market players for global, regional, and country-level market segments, enable our clients to see more, know more, and do more, helping to answer their most important questions.

Contact:
Market Research Future
Office No. 528, Amanora Chambers
Magarpatta Road, Hadapsar
Pune 411028, Maharashtra, India
+1 646 845 9312
Email: [emailprotected]

Read more from the original source:

Deep Learning Market Demand to Shoot with the Increasing Uses of Security & Surveillance Solutions Increasing Demand Forecast 2023 - NeighborWebSJ

Read More..