
Machine learning at the edge: The AI chip company challenging Nvidia and Qualcomm – VentureBeat

Were you unable to attend Transform 2022? Check out all of the summit sessions in our on-demand library now! Watch here.

Today's demand for real-time data analytics at the edge marks the dawn of a new era in machine learning (ML): edge intelligence. That need for time-sensitive data is, in turn, fueling a massive AI chip market, as companies look to provide ML models at the edge with lower latency and better power efficiency.

Conventional edge ML platforms consume a lot of power, limiting the operational efficiency of smart devices, which live on the edge. Those devices are also hardware-centric, limiting their computational capability and making them incapable of handling varying AI workloads. They leverage power-inefficient GPU- or CPU-based architectures and are not optimized for embedded edge applications with latency requirements.

Even though industry behemoths like Nvidia and Qualcomm offer a wide range of solutions, they mostly use a combination of GPU- or data center-based architectures and scale them to the embedded edge as opposed to creating a purpose-built solution from scratch. Also, most of these solutions are set up for larger customers, making them extremely expensive for smaller companies.

In essence, the $1 trillion global embedded-edge market is reliant on legacy technology that limits the pace of innovation.

MetaBeat 2022

MetaBeat will bring together thought leaders to give guidance on how metaverse technology will transform the way all industries communicate and do business on October 4 in San Francisco, CA.

ML company Sima AI seeks to address these shortcomings with its machine learning system-on-chip (MLSoC) platform, which enables ML deployment and scaling at the edge. The California-based company, founded in 2018, announced today that it has begun shipping the MLSoC platform to customers, with an initial focus on solving computer vision challenges in smart vision, robotics, Industry 4.0, drones, autonomous vehicles, healthcare and the government sector.

The platform uses a software-hardware co-design approach that emphasizes software capabilities to create edge ML solutions that consume minimal power and can handle varying ML workloads.

Built on 16nm technology, the MLSoC's processing system consists of computer vision processors for image pre- and post-processing, coupled with dedicated ML acceleration and high-performance application processors. Surrounding the real-time intelligent video processing are memory interfaces, communication interfaces, and system management, all connected via a network-on-chip (NoC). The MLSoC features low operating power and high ML processing capacity, making it ideal either as a standalone edge-based system controller or as an ML-offload accelerator for processors, ASICs and other devices.

The software-first approach includes carefully defined intermediate representations (including the TVM Relay IR), along with novel compiler-optimization techniques. This software architecture enables Sima AI to support a wide range of frameworks (e.g., TensorFlow, PyTorch, ONNX) and compile more than 120 networks.
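To give a flavor of what a compiler-optimization pass does, here is a toy constant-folding pass over a miniature expression IR. This is a deliberately simplified sketch for illustration only, far simpler than the TVM Relay IR and not Sima AI's actual compiler:

```python
# Toy illustration of a compiler optimization pass over a tiny expression IR.
# Constant folding replaces operations on known constants with their results
# at compile time, so no work is done at runtime.

from dataclasses import dataclass
from typing import Union

@dataclass
class Const:
    value: float

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

@dataclass
class Mul:
    left: "Expr"
    right: "Expr"

Expr = Union[Const, Add, Mul]

def fold_constants(expr: Expr) -> Expr:
    """Recursively replace operations on constants with their results."""
    if isinstance(expr, Const):
        return expr
    left = fold_constants(expr.left)
    right = fold_constants(expr.right)
    if isinstance(left, Const) and isinstance(right, Const):
        if isinstance(expr, Add):
            return Const(left.value + right.value)
        return Const(left.value * right.value)
    return type(expr)(left, right)

# (2 + 3) * 4 folds to the single constant 20 before any code is emitted.
folded = fold_constants(Mul(Add(Const(2), Const(3)), Const(4)))
```

Real ML compilers chain many such passes (operator fusion, layout transforms, dead-code elimination) over much richer IRs, but the rewrite-the-tree pattern is the same.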

Many ML startups focus on building pure ML accelerators rather than an SoC with a computer vision processor, application processors, codecs, and external memory interfaces, which allow the MLSoC to be used as a stand-alone solution without connecting to a host processor. Other solutions usually lack network flexibility, performance per watt, and push-button efficiency, all of which are required to make ML effortless for the embedded edge.

Sima AI's MLSoC platform differs from other existing solutions in that its software-first approach addresses all of these areas at once.

"The MLSoC platform is flexible enough to address any computer vision application, using any framework, model, network, and sensor with any resolution. Our ML compiler leverages the open-source Tensor Virtual Machine (TVM) framework as the front-end, and thus supports the industry's widest range of ML models and ML frameworks for computer vision," Krishna Rangasayee, CEO and founder of Sima AI, told VentureBeat in an email interview.

From a performance point of view, Sima AI claims its MLSoC platform delivers 10x better results in key figures of merit, such as FPS/W and latency, than alternatives.

The company's hardware architecture optimizes data movement and maximizes hardware performance by precisely scheduling all computation and data movement ahead of time, including across internal and external memory, to minimize wait times.
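The ahead-of-time idea can be sketched with a toy static scheduler: given each operation's duration and its dependencies, every start time is computed before execution, so nothing waits at runtime. The ops and timings below are hypothetical, not Sima AI's actual pipeline:

```python
# Toy static scheduler: compute all start times ahead of time from op
# durations and dependencies, so no compute unit waits at runtime.
# Illustrative only; the ops and durations are made up.

def schedule(durations, deps):
    """Return the earliest start time for each op."""
    start = {}
    def resolve(op):
        if op in start:
            return start[op]
        # An op may start once all of its producers have finished.
        s = max((resolve(d) + durations[d] for d in deps.get(op, [])), default=0)
        start[op] = s
        return s
    for op in durations:
        resolve(op)
    return start

# Two DMA loads feed a convolution, whose result is stored back out.
durations = {"load_a": 2, "load_b": 2, "conv": 5, "store": 1}
deps = {"conv": ["load_a", "load_b"], "store": ["conv"]}
plan = schedule(durations, deps)
# load_a and load_b both start at 0 (overlapped); conv at 2; store at 7.
```

A real scheduler also models memory capacity and bus contention, but the principle is the same: resolve the whole timeline at compile time instead of stalling at run time.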

Sima AI offers APIs to generate highly optimized MLSoC code blocks that are automatically scheduled on the heterogeneous compute subsystems. The company has created a suite of specialized and generalized optimization and scheduling algorithms for the back-end compiler that automatically convert the ML network into highly optimized assembly code that runs on the machine learning accelerator (MLA) block.

For Rangasayee, the next phase of Sima AI's growth is focused on revenue and on scaling its engineering and business teams globally. As things stand, Sima AI has raised $150 million in funding from top-tier VCs such as Fidelity and Dell Technologies Capital. With the goal of transforming the embedded-edge market, the company has also announced partnerships with key industry players like TSMC, Synopsys, Arm, Allegro, GUC and Arteris.



The Democratization of Machine Learning: Here's how Applied Machine Learning Days (AMLD) Africa impacted (almost) the whole continent – African…

AMLD Africa (https://bit.ly/3AvIECg), a free Machine Learning conference, makes it possible for anyone in Africa to learn about AI with world-class speakers and entities.

US$15.7 trillion. This is the expected contribution of artificial intelligence to the global economy by 2030, according to PwC's report "Sizing the Prize" (https://pwc.to/3R8JFY4). Here's how Applied Machine Learning Days (AMLD) Africa is offering free access to high-quality education around AI to democratize AI in Africa and, ultimately, help make sure Africa gets a fair share of the prize.

The more Africans can learn, grasp, and be inspired by AI, the more existing projects (or new ones) will leverage data to create social, economic, and cultural value in local environments. It is with the same ambitious vision that AMLD Africa's second edition will present AI through an African lens from the 3rd to the 5th of November at the prestigious Mohammed VI Polytechnic University (UM6P) in Ben Guerir, Morocco.

AMLD Africa is a three-day conference consisting of both inspiring talks and instructive workshops. Speakers will have an opportunity to inspire African talents, teach those who would like to improve their technical skills, and strengthen African data science communities such as Zindi. In doing so, the conference will embody its motto: Democratizing Machine Learning in Africa.

For its first edition, AMLD Africa presented a truly comprehensive platform that included academics from Stanford, the University of the Western Cape and EPFL, corporates from IBM and Google, entrepreneurs, and even the Assistant Director of UNESCO. 3,000 participants from all over the continent (50 African countries) were able to contemplate a true picture of AI, one where every entity added its colour to make the whole as accurate as possible. If you too are a motivated and passionate AI enthusiast, you can embellish this year's painting by clicking here (https://bit.ly/3R60MKl) to apply for a talk or a workshop.

When asked how to retain talent on the African continent, Moustapha Cisse, head of the Google AI Center in Accra, Ghana, said in his opening keynote, "A Future of Shared AI Knowledge": "It is a matter of identifying important problems, and having the opportunity and tools to solve them. Furthermore, the increase of start-ups creates a dynamic and go-ahead environment for machine learning engineers and researchers. If we manage to create such an ecosystem, we will be able to bring change, through solving long-lasting problems."

AMLD Africa not only includes both entities and individuals, but also takes a cross-industry approach. Since AI is impacting every sector of private and public life (finance, national security, healthcare, etc.), AMLD Africa organizes the talks within tracks: healthcare, agriculture and even entertainment, for example. Whether it is detecting cervical cancer with a smartphone-based solution, measuring and optimizing agricultural production using aerial imagery, or even music generated by AI (which you can listen to in the video below), the talks tackle actual challenges and convey the idea that technology is not a goal in itself, but rather a tool for the mind, and sometimes the ears.

https://bit.ly/3CKT4AQ

AI comes with plenty of ethical, social, and economic challenges, but those challenges come with an undeniable opportunity to leapfrog the technological infrastructure associated with the Third Industrial Revolution. It is only when we give young, motivated and passionate talents easy access to technology that more initiatives will leverage data and AI can benefit daily life. If you are wondering how that can be done, then AMLD Africa is the conference to attend.

Distributed by APO Group on behalf of AMLD Africa.

Contact: Mohamed Ali Dhraief [emailprotected]

This press release has been issued by APO. The content is not monitored by the editorial team of African Business, and none of the content has been checked or validated by our editorial teams, proofreaders or fact checkers. The issuer is solely responsible for the content of this announcement.


GBT is Implementing Machine Learning Driven, Pattern Matching Technology for its Epsilon, Microchip Reliability Verification and Correction EDA Tool…

GBT Technologies Inc.

SAN DIEGO, Sept. 01, 2022 (GLOBE NEWSWIRE) -- GBT Technologies Inc. (OTC PINK: GTCH) ("GBT" or the "Company") is implementing a machine learning driven, pattern matching technology within Epsilon, its microchip reliability verification and correction Electronic Design Automation (EDA) tool. Design rules are getting increasingly complex with each new process node, and design firms face new challenges in the physical verification domain. One of the major areas affected by process physics is reliability verification (RV).

Microchips are major components in nearly every major electronics application. The civil, military and space exploration industries require reliable operation for many years, and in severe environments. High-performance computing systems require advanced processing with high reliability to ensure the consistency and accuracy of the processed data. Complex integrated circuits are at the heart of these systems and need to function with a high level of dependability. Particularly in the fields of medicine, aviation, transportation, data storage and industrial instrumentation, microchip reliability is crucial.

GBT is implementing new machine learning driven, pattern matching techniques within its Epsilon system with the goal of addressing advanced semiconductor physics, ensuring a high level of reliability, optimal power consumption and high performance. As Epsilon analyzes the layout of an integrated circuit (IC), it identifies reliability weak spots, which are specific regions of an IC's layout, and learns their patterns. As the tool continues analyzing the layout, it records problematic zones, taking into account the patterns' orientations and placements. In addition, it is designed to understand small variations in the dimensions of a pattern, as specified by the designer or an automatic synthesis tool. As weak spots are identified, the tool will take appropriate action to modify and correct them.
A deep learning mechanism will perform the data analysis, identification, categorization, and reasoning while executing automatic corrections. The machine learning model will learn the patterns and record them in an internal library for future use. Epsilon's pattern matching technology will analyze the chip's data according to a set of predefined and learned-from-experience rules. Its cognitive capabilities will let it self-adjust to the newest nodes, with their new constraints and challenges, with the goal of providing quick and reliable verification and correction of an IC layout.


The Company released a video that explains the potential functions of the Epsilon tool: https://youtu.be/Mz4IOGRHeqw

"The ability to analyze and address advanced ICs' reliability parameters is necessary to mitigate the risk of system degradation, overheating, and possible malfunction. These issues can affect a microchip's performance, power consumption, data storage and retrieval, and heat, and cause early failures that may be critical in vital electronic systems. Epsilon analyzes a microchip's data for reliability, power and electrothermal characteristics, and performs auto-correction in case violations are found. We are now implementing an intelligent technology for Epsilon with the goal of utilizing pattern matching algorithms to formulate smart detection of reliability issues within an integrated circuit's layout. The new techniques will analyze and learn weak spots within microchip data, predicting failure models based on knowledge of the process physics and electrical constraints. They will take into consideration each device's function, connectivity attributes, electrical current information, electrothermal factors and more to determine problematic spots and perform auto-correction. Particularly for FinFET and GAA FET (gate-all-around FET) technologies, a device's functionality is developed with major reliability considerations, ensuring power management efficiency and optimal thermal analysis, aiming for a long, reliable life span. Using smart pattern matching methods, we plan to improve reliability analysis, achieving consistency and accuracy across designs within advanced manufacturing processes. As process dimensions shrink, IC layout features become much more complex to analyze for electrical phenomena. To provide an intelligent answer to these complexities, we are implementing deep learning-based pattern matching technology with the goal of ensuring efficient, green microchip power consumption, higher performance, optimized thermal distribution, and ultimately superior reliability," stated Danny Rittman, the Company's CTO.

There is no guarantee that the Company will be successful in researching, developing or implementing this system. In order to successfully implement this concept, the Company will need to raise adequate capital to support its research and, if successfully researched and fully developed, the Company would need to enter into a strategic relationship with a third party that has experience in manufacturing, selling and distributing this product. There is no guarantee that the Company will be successful in any or all of these critical steps.

About Us

GBT Technologies, Inc. (OTC PINK: GTCH) ("GBT") (http://gbtti.com) is a development-stage company which considers itself a native of Internet of Things (IoT), Artificial Intelligence (AI) and Enabled Mobile Technology Platforms used to increase IC performance. GBT has assembled a team with extensive technology expertise and is building an intellectual property portfolio consisting of many patents. GBT's mission is to license the technology and IP to synergetic partners in the areas of hardware and software. Once commercialized, it is GBT's goal to have a suite of products including smart microchips, AI, encryption, Blockchain, IC design, mobile security applications, and database management protocols, with tracking and supporting cloud software (without the need for GPS). GBT envisions this system as the creation of a global mesh network using advanced nodes and super-performing new-generation IC technology. The core of the system will be its advanced microchip technology, which can be installed in any mobile or fixed device worldwide. GBT's vision is to produce this system as a low-cost, secure, private mesh network between all enabled devices, providing shared processing, advanced mobile database management and sharing, while using these enhanced mobile features as an alternative to traditional carrier services.

Forward-Looking Statements

Certain statements contained in this press release may constitute "forward-looking statements". Forward-looking statements provide current expectations of future events based on certain assumptions and include any statement that does not directly relate to any historical or current fact. Actual results may differ materially from those indicated by such forward-looking statements because of various important factors as disclosed in our filings with the Securities and Exchange Commission located at their website (http://www.sec.gov). In addition to these factors, actual future performance, outcomes, and results may differ materially because of more general factors including (without limitation) general industry and market conditions and growth rates, economic conditions, governmental and public policy changes, the Company's ability to raise capital on acceptable terms, if at all, the Company's successful development of its products and the integration into its existing products, and the commercial acceptance of the Company's products. The forward-looking statements included in this press release represent the Company's views as of the date of this press release and these views could change. However, while the Company may elect to update these forward-looking statements at some point in the future, the Company specifically disclaims any obligation to do so. These forward-looking statements should not be relied upon as representing the Company's views as of any date subsequent to the date of the press release.

Contact:

Dr. Danny Rittman, CTO press@gopherprotocol.com


What’s the Difference Between Vertical Farming and Machine Learning? – Electronic Design

What you'll learn

Sometimes inspiration comes in the oddest ways. I like to watch CBS News Sunday Morning because of the variety of stories they air. Recently, they did one on "Vertical Farming - A New Form of Agriculture" (see video below).

CBS News Sunday Morning recently did a piece on vertical farming that spawned this article.

For those who didn't watch the video, vertical farming is essentially a method of indoor farming using hydroponics. Hydroponics isn't new; it's a subset of hydroculture in which crops are grown without soil. Instead, the plants grow in mineral-enriched water. This can be done in conjunction with sunlight, but typically an artificial light source is used.

The approach is useful in areas that don't provide enough light, or at times or in locations where the temperature or conditions outside would not be conducive to growing plants.

Vertical farming is hydroponics taken to the extreme, with stacks upon stacks of trays with plants under an array of lights. These days, the lights typically are LEDs because of their efficiency and the ability to generate the type of light most useful for plant growth. Automation can be used to streamline planting, support, and harvesting.

A building can house a vertical farm anywhere in the world, including in the middle of a city. Though lots of water is required, it's recycled, making vertical farming more efficient than other forms of agriculture.

Like many technologies, the opportunities are great if you ignore the details. That's where my usual contrary nature came into play, though, since I followed up my initial interest by looking for limitations or problems related to vertical farming. Of course, I found quite a few, and then noticed that many of the general issues applied to another topic I cover a lot: machine learning/artificial intelligence (ML/AI).

If you made it this far, you know how I'm looking at the difference between machine learning and vertical farming. They obviously have no relationship in terms of their technology and implementation, but they do have much in common when one looks at the potential problems and solutions related to those technologies.

As electronic system designers and developers, we constantly deal with potential solutions and their tradeoffs. Machine learning is one of those generic categories that has proven useful in many instances. However, one must be wary of the issues underlying those flashy approaches.

Vertical farming, like machine learning, is something one can dabble in. To be successful, though, it helps to have an expert or at least someone who can quickly gain that experience. This tends to be the case with new and renewed technologies in general. I suspect significantly more ML experts are available these days for a number of reasons like the cost of hardware, but the demand remains high.

Vertical farming uses a good bit of computer automation. The choice of plants, fertilizers, and other aspects of hydroponic farming is critical to the success of the farm. Then there's the maintenance aspect. ML-based solutions are one way of reducing the expertise or time required of the staff to support the system.

ML programmers and developers also have access to easier-to-use tools, reducing the amount of expertise and training required to take advantage of ML solutions. These tools often incorporate their own ML models, which are different from those being generated.

Hydroponics works well for many plants, but unfortunately not for others. For example, crops like microgreens work well, but a cherry or apple tree often struggles with this treatment.

ML suffers from the same problem in that it's not applicable to all computational chores. But, unlike vertical farms, ML applications and solutions are more diverse. The challenge for developers comes down to understanding where ML is and isn't applicable. Trying to force-fit a machine-learning model to handle a particular problem can result in a solution that provides poor results at high cost.

Vertical farms require power for lighting and to move liquid. ML applications tend to do lots of computation and thus require a good deal of power compared to other computational requirements. One big difference between the two is that ML solutions are scalable and hardware tradeoffs can be significant.

For example, ML hardware can deliver performance that's orders of magnitude better than software solutions while reducing power requirements. Likewise, even software-only solutions may be efficient enough to do useful work while using little power, simply because developers have made the ML models work within the limitations of their design. Vertical farms do not have this flexibility.

Large vertical farms do require a major investment, and they're not cheap to run due to their scale. The same is true for cloud-based ML solutions utilizing the latest in disaggregated cloud-computing centers. Such data centers are leveraging technologies like SmartNICs and smart storage to run ML models closer to communication and storage than was possible in the past.

The big difference between vertical farming and ML is scalability. It's now practical for multiple ML models to be running in a smartwatch with a dozen sensors. But that doesn't compare to dealing with agriculture, which must scale with the rest of the physical world's requirements, such as the plants themselves.

Still, these days, ML does require a significant investment in development and in building the experience needed to apply it adequately. Software and hardware vendors have been working to lower both the startup and long-term development costs, aided by the plethora of free software tools and low-cost hardware that's now generally available.

Cut the power on a vertical farm and things come to a grinding halt rather quickly, although it's not like having an airplane lose power at 10,000 feet. Still, plants do need sustenance and light, though they're accustomed to changes over time. Nonetheless, responding to failures within the system is important to the system's long-term usefulness.

ML applications tend to require electricity to run, but that tends to be true of the entire system. A more subtle problem with ML applications is the source of input, which is typically sensors such as cameras, temperature sensors, etc. Determining whether the input data is accurate can be challenging; in many cases, designers simply assume that this information is accurate. Applications such as self-driving cars often use redundant and alternative inputs to provide a more robust set of inputs.
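One simple way redundant inputs are combined, sketched below, is median voting: take the median of several sensors so a single faulty reading cannot skew the result. This is only an illustrative toy; real systems (self-driving cars included) use far more sophisticated fusion:

```python
# Minimal sketch of rejecting a faulty reading with redundant sensors.
# Median voting is robust to a single outlier among three inputs.

from statistics import median

def fused_reading(sensor_values):
    """Median-vote across redundant sensors."""
    return median(sensor_values)

# Three temperature sensors; one has failed and reads absurdly high.
readings = [21.4, 21.6, 250.0]
value = fused_reading(readings)  # 21.6: the failed sensor is outvoted
```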

Vertical-farming technology continues to change and become more refined, but it's still maturing. The same is true for machine learning, though the comparison is like something between a penny bank and Fort Knox. There are simply more ML solutions, many of which are very mature, with millions of practical applications.

That said, ML technologies and applications are so varied, and the rate of change so large, that keeping up with what's available, let alone how things work in detail, can be overwhelming.

Vertical farming is benefiting from advances in technology, from robotics to sensors to ML. Tracking plant growth and germination and detecting pests are just a few tasks that apply across all of agriculture, including vertical farming.

As with many "What's the Difference" articles, the comparisons are not necessarily one-to-one, but hopefully you picked up something about ML or vertical farms that was of interest. Many issues don't map well, like the problem of pollination for vertical farms. Though the output of vertical farms will likely feed some ML developers, ML is likely to play a more important part in vertical farming, given the level of automation possible with the sensors, robots, and ML monitoring now available.


Solve the problem of unstructured data with machine learning – VentureBeat


We're in the midst of a data revolution. The volume of digital data created within the next five years will total twice the amount produced so far, and unstructured data will define this new era of digital experiences.

Unstructured data (information that doesn't follow conventional models or fit into structured database formats) represents more than 80% of all new enterprise data. To prepare for this shift, companies are finding innovative ways to manage, analyze and maximize the use of data in everything from business analytics to artificial intelligence (AI). But decision-makers are also running into an age-old problem: How do you maintain and improve the quality of massive, unwieldy datasets?

With machine learning (ML), that's how. Advancements in ML technology now enable organizations to efficiently process unstructured data and improve quality assurance efforts. With a data revolution happening all around us, where does your company fall? Are you saddled with valuable yet unmanageable datasets, or are you using data to propel your business into the future?

There's no disputing the value of accurate, timely and consistent data for modern enterprises; it's as vital as cloud computing and digital apps. Despite this reality, however, poor data quality still costs companies an average of $13 million annually.


To navigate data issues, you may apply statistical methods to measure the shape of your data, which enables your data teams to track variability, weed out outliers, and reel in data drift. Statistics-based controls remain valuable for judging data quality and determining how and when you should turn to datasets before making critical decisions. While effective, this statistical approach is typically reserved for structured datasets, which lend themselves to objective, quantitative measurements.
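The statistics-based controls described above can be sketched in a few lines: a z-score filter to flag outliers and a mean-shift check for drift. The thresholds below are illustrative defaults, not prescriptions:

```python
# Minimal statistics-based data-quality controls: z-score outlier flagging
# and a simple mean-shift drift check. Thresholds are illustrative only.

from statistics import mean, stdev

def flag_outliers(values, z_threshold=3.0):
    """Return values lying more than z_threshold standard deviations
    from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > z_threshold * sigma]

def has_drifted(baseline, current, tolerance=0.5):
    """Flag drift when the current mean moves more than `tolerance`
    baseline standard deviations away from the baseline mean."""
    return abs(mean(current) - mean(baseline)) > tolerance * stdev(baseline)

# A metric whose distribution has shifted well outside its baseline.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8]
current = [12.0, 12.2, 11.9, 12.1, 12.3]
drifted = has_drifted(baseline, current)  # True: the mean moved ~2 units
```

In practice these checks run continuously against incoming batches, and the thresholds are tuned per metric rather than fixed globally.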

But what about data that doesn't fit neatly into Microsoft Excel or Google Sheets, including:

When these types of unstructured data are at play, it's easy for incomplete or inaccurate information to slip into models. When errors go unnoticed, data issues accumulate and wreak havoc on everything from quarterly reports to forecasting projections. A simple copy-and-paste approach from structured data to unstructured data isn't enough and can actually make matters much worse for your business.

The common adage "garbage in, garbage out" is highly applicable to unstructured datasets. Maybe it's time to trash your current data approach.

When considering solutions for unstructured data, ML should be at the top of your list. That's because ML can analyze massive datasets and quickly find patterns among the clutter, and with the right training, ML models can learn to interpret, organize and classify unstructured data types in any number of forms.

For example, an ML model can learn to recommend rules for data profiling, cleansing and standardization, making those efforts more efficient and precise in industries like healthcare and insurance. Likewise, ML programs can identify and classify text data by topic or sentiment in unstructured feeds, such as those on social media or within email records.
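The workflow can be sketched with a deliberately tiny stand-in for a trained sentiment model: score each feed item against sentiment keyword lists. The word lists and labels are made up for illustration; a production system would use a trained classifier, not keyword matching:

```python
# Toy stand-in for a sentiment classifier over unstructured feed items.
# Illustrative only: real systems use trained models, not keyword lists.

POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "hate", "terrible", "angry"}

def classify_sentiment(text: str) -> str:
    """Label text positive/negative/neutral by keyword counts."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

feed = [
    "excellent support and I love this product",
    "terrible experience, very angry",
    "shipped on tuesday",
]
labels = [classify_sentiment(t) for t in feed]
# -> ["positive", "negative", "neutral"]
```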

As you improve your data quality efforts through ML, keep in mind a few key dos and don'ts:

Your unstructured data is a treasure trove of new opportunities and insights. Yet only 18% of organizations currently take advantage of their unstructured data, and data quality is one of the top factors holding more businesses back.

As unstructured data becomes more prevalent and more pertinent to everyday business decisions and operations, ML-based quality controls provide much-needed assurance that your data is relevant, accurate, and useful. And when you aren't hung up on data quality, you can focus on using data to drive your business forward.

Just think about the possibilities that arise when you get your data under control, or, better yet, let ML take care of the work for you.

Edgar Honing is senior solutions architect at AHEAD.



All You Need to Know About Support Vector Machines – Spiceworks News and Insights

A support vector machine (SVM) is defined as a machine learning algorithm that uses supervised learning models to solve complex classification, regression, and outlier detection problems by performing optimal data transformations that determine boundaries between data points based on predefined classes, labels, or outputs. This article explains the fundamentals of SVMs, their working, types, and a few real-world examples.

A support vector machine (SVM) is a machine learning algorithm that uses supervised learning models to solve complex classification, regression, and outlier detection problems by performing optimal data transformations that determine boundaries between data points based on predefined classes, labels, or outputs. SVMs are widely adopted across disciplines such as healthcare, natural language processing, signal processing applications, and speech & image recognition fields.

Technically, the primary objective of the SVM algorithm is to identify a hyperplane that cleanly separates the data points of different classes. The hyperplane is positioned so that the largest possible margin separates the classes under consideration.

The support vector representation is shown in the figure below:

As seen in the above figure, the margin refers to the maximum width of the slab that runs parallel to the hyperplane without containing any data points. Such hyperplanes are easier to define for linearly separable problems; for real-life problems, the SVM algorithm maximizes the margin between the support vectors while tolerating incorrect classifications for a small fraction of data points (a soft margin).

SVMs are inherently designed for binary classification problems. However, with the rise in computationally intensive multiclass problems, several binary classifiers can be constructed and combined to formulate SVMs that implement such multiclass classifications through binary means.

In the mathematical context, an SVM refers to a set of ML algorithms that use kernel methods to transform data features by employing kernel functions. Kernel functions rely on the process of mapping complex datasets to higher dimensions in a manner that makes data point separation easier. The function simplifies the data boundaries for non-linear problems by adding higher dimensions to map complex data points.

Explicitly introducing additional dimensions would be computationally taxing, so the data is never fully transformed. Instead, the kernel trick computes the inner products in the higher-dimensional space directly from the original features, achieving the effect of the transformation efficiently and inexpensively.
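
The kernel trick can be made concrete with a small numeric check: for the degree-2 polynomial kernel K(a, b) = (a · b)² in two dimensions, the equivalent explicit feature map is φ(x) = (x₁², √2·x₁x₂, x₂²). The kernel reproduces the higher-dimensional inner product without ever computing φ:

```python
import math

def phi(x):
    """Explicit quadratic feature map for 2D input."""
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

def poly_kernel(a, b):
    """Degree-2 polynomial kernel: (a . b)^2, computed in the original space."""
    dot = a[0] * b[0] + a[1] * b[1]
    return dot * dot

a, b = (1.0, 2.0), (3.0, 0.5)

# Inner product in the explicit 3D feature space...
explicit = sum(p * q for p, q in zip(phi(a), phi(b)))
# ...equals the kernel evaluated directly on the 2D inputs.
implicit = poly_kernel(a, b)

print(explicit, implicit)  # both 16.0
```

This is why kernel SVMs scale: only pairwise kernel values are ever needed, not the (possibly huge) transformed vectors themselves.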

The idea behind the SVM algorithm was first captured in 1963 by Vladimir N. Vapnik and Alexey Ya. Chervonenkis. Since then, SVMs have gained enough popularity as they have continued to have wide-scale implications across several areas, including the protein sorting process, text categorization, facial recognition, autonomous cars, robotic systems, and so on.

See More: What Is a Neural Network? Definition, Working, Types, and Applications in 2022

The working of a support vector machine can be better understood through an example. Let's assume we have red and black labels with the features denoted by x and y. We intend to have a classifier for these tags that classifies data into either the red or black category.

Let's plot the labeled data on an x-y plane, as below:

A typical SVM separates these data points into red and black tags using the hyperplane, which in this two-dimensional case is simply a line. The hyperplane denotes the decision boundary, on either side of which data points fall under the red or black category.

The optimal hyperplane is the line that maximizes the margin between the two closest tags or labels (red and black). Because its distance to the most immediate labels is the largest possible, the resulting classification is more robust.

The above scenario is applicable for linearly separable data. However, for non-linear data, a simple straight line cannot separate the distinct data points.

Here's an example of a complex, non-linearly separable dataset:

The above dataset reveals that a single straight hyperplane is not sufficient to separate the involved labels or tags. However, the two groups are visibly distinct, which makes it possible to segregate them with an added dimension.

For data classification, you need to add another dimension to the feature space. For the linear data discussed until this point, the two dimensions x and y were sufficient. In this case, we add a z-dimension to better classify the data points. Moreover, for convenience, let's use the equation for a circle, z = x² + y².

With the third dimension, the slice of feature space along the z-direction looks like this:

Now, with three dimensions, the hyperplane runs parallel to the x-y plane at a particular value of z; let's take z = 1.

The remaining data points are further mapped back to two dimensions.

The above figure reveals the boundary for data points along features x, y, and z: a circle of radius 1 that segregates the two label classes via the SVM.
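
The circle mapping above can be sketched directly: lifting each point with z = x² + y² makes the inside/outside classes separable by the flat plane z = 1. The example points below are invented for illustration.

```python
# Points inside and outside the unit circle (invented for illustration).
points = [(0.2, 0.3), (-0.5, 0.1), (0.0, -0.6),   # inside the circle
          (1.5, 0.2), (-1.0, 1.0), (0.3, -1.4)]   # outside the circle

def lift(p):
    """Map a 2D point to 3D by adding the feature z = x^2 + y^2."""
    x, y = p
    return (x, y, x * x + y * y)

def classify(p, threshold=1.0):
    """The flat plane z = 1 now separates the two classes linearly."""
    return "inside" if lift(p)[2] < threshold else "outside"

for p in points:
    print(p, "->", classify(p))
```

In the lifted space the decision boundary is a plane; projected back to 2D it is exactly the unit circle from the figure.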

Let's consider another method of visualizing data points in three dimensions for separating two tags (two differently colored tennis balls in this case). Consider the balls lying on a 2D plane surface. Now, if we lift the surface upward, all the tennis balls are distributed in the air. The two differently colored balls may separate in the air at one point in this process. While this occurs, you can place the surface between the two segregated sets of balls.

In this entire process, the act of lifting the 2D surface refers to the event of mapping data into higher dimensions, which is technically referred to as kernelling, as mentioned earlier. In this way, complex data points can be separated with the help of more dimensions. The concept highlighted here is that the data points continue to get mapped into higher dimensions until a hyperplane is identified that shows a clear separation between the data points.

The figure below gives the 3D visualization of the above use case:

See More: Narrow AI vs. General AI vs. Super AI: Key Comparisons

Support vector machines are broadly classified into two types: simple or linear SVM and kernel or non-linear SVM.

A linear SVM refers to the SVM type used for classifying linearly separable data. When a dataset can be segregated into categories or classes with the help of a single straight line, the data is referred to as linearly separable, and the classifier that classifies such data is termed a linear SVM classifier.

A simple SVM is typically used to address classification and regression analysis problems.
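
As an illustrative sketch (not a production implementation), a linear SVM can be trained by subgradient descent on the regularized hinge loss. The toy data below is invented and linearly separable by construction; labels are +1 and -1 as is conventional for SVMs.

```python
import random

# Linearly separable toy data: +1 above the line y = x, -1 below it.
data = [((0.0, 2.0), 1), ((1.0, 3.0), 1), ((-1.0, 1.5), 1),
        ((2.0, 0.0), -1), ((3.0, 1.0), -1), ((1.5, -1.0), -1)]

def train_svm(samples, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Subgradient descent on hinge loss + L2 regularizer (Pegasos-style)."""
    random.seed(seed)
    samples = list(samples)
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        random.shuffle(samples)
        for (x1, x2), y in samples:
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            # Regularization shrinks w every step; the hinge term only
            # fires for points inside the margin (margin < 1).
            w[0] -= lr * lam * w[0]
            w[1] -= lr * lam * w[1]
            if margin < 1:
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

w, b = train_svm(data)
print([predict(w, b, x) for x, _ in data])
```

After training, the learned line separates the two classes; with clean data like this the classifier recovers all labels exactly.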

Non-linear data that cannot be segregated into distinct categories with the help of a straight line is classified using a kernel or non-linear SVM. Here, the classifier is referred to as a non-linear classifier. The classification can be performed with a non-linear data type by adding features into higher dimensions rather than relying on 2D space. Here, the newly added features fit a hyperplane that helps easily separate classes or categories.

Kernel SVMs are typically used to handle classification problems where the classes cannot be separated linearly in the original feature space.

See More: What is Sentiment Analysis? Definition, Tools, and Applications

SVMs rely on supervised learning methods to classify unknown data into known categories. These find applications in diverse fields.

Here, well look at some of the top real-world examples of SVMs:

The geo-sounding problem is one of the widespread use cases for SVMs, wherein the process is employed to track the planet's layered structure. This entails solving inversion problems, where the observations or results are used to infer the variables or parameters that produced them.

In the process, linear functions and support vector algorithmic models separate the electromagnetic data. Moreover, linear programming practices are employed while developing the supervised models. As the problem size is considerably small, the dimension size is inevitably tiny, which makes mapping the planet's structure tractable.

Soil liquefaction is a significant concern when events such as earthquakes occur. Assessing its potential is crucial while designing any civil infrastructure. SVMs play a key role in determining the occurrence and non-occurrence of such liquefaction. Technically, SVMs handle data from two tests, SPT (Standard Penetration Test) and CPT (Cone Penetration Test), which use field measurements to assess liquefaction potential.

Moreover, SVMs are used to develop models that involve multiple variables, such as soil factors and liquefaction parameters, to determine the ground surface strength. It is believed that SVMs achieve an accuracy of close to 96-97% for such applications.

Protein remote homology is a field of computational biology where proteins are categorized into structural and functional parameters depending on the sequence of amino acids when sequence identification is seemingly difficult. SVMs play a key role in remote homology, with kernel functions determining the commonalities between protein sequences.

Thus, SVMs play a defining role in computational biology.

SVMs are known to solve complex mathematical problems. However, smooth SVMs are preferred for data classification purposes; these apply smoothing techniques that reduce the influence of outliers and make patterns identifiable.

Thus, for optimization problems, smooth SVMs use algorithms such as the Newton-Armijo algorithm to handle larger datasets that conventional SVMs cannot. Smooth SVM types typically explore math properties such as strong convexity for more straightforward data classification, even with non-linear data.

SVMs can classify facial structures vs. non-facial ones. The training data uses two classes, face entities (denoted by +1) and non-face entities (denoted by -1), with n*n pixel representations to distinguish between face and non-face structures. Further, each pixel is analyzed, and features are extracted that denote face and non-face characteristics. Finally, the process creates a square decision boundary around facial structures based on pixel intensity and classifies the resultant images.

Moreover, SVMs are also used for facial expression classification, which includes expressions denoted as happy, sad, angry, surprised, and so on.

In the current scenario, SVMs are used for the classification of images of surfaces: images captured of surfaces can be fed into SVMs to determine their texture and classify them as smooth or gritty.

Text categorization refers to classifying data into predefined categories. For example, news articles can be categorized as politics, business, stock market, or sports. Similarly, one can segregate emails into spam, non-spam, junk, and other categories.

Technically, each article or document is assigned a score, which is then compared to a predefined threshold value. The article is classified into its respective category depending on the evaluated score.
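
The score-versus-threshold scheme described above can be sketched as follows. The categories, keyword weights, and threshold value are invented for illustration; a trained classifier would learn these weights from data rather than hard-code them.

```python
# Per-category keyword weights and a score threshold (all illustrative).
CATEGORY_WEIGHTS = {
    "politics": {"election": 2.0, "senate": 1.5, "vote": 1.0},
    "sports":   {"match": 1.5, "goal": 1.0, "team": 1.5},
    "business": {"market": 1.5, "stock": 2.0, "earnings": 1.5},
}
THRESHOLD = 2.0

def categorize(text):
    """Assign every category whose keyword score clears the threshold."""
    words = text.lower().split()
    results = []
    for category, weights in CATEGORY_WEIGHTS.items():
        score = sum(weights.get(w, 0.0) for w in words)
        if score >= THRESHOLD:
            results.append(category)
    return results or ["uncategorized"]

print(categorize("The senate election vote is scheduled"))
print(categorize("hello world"))
```

A document scoring below the threshold for every category falls through to "uncategorized", mirroring the thresholding step described in the text.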

For handwriting recognition examples, the dataset containing passages that different individuals write is supplied to SVMs. Typically, SVM classifiers are trained with sample data initially and are later used to classify handwriting based on score values. Subsequently, SVMs are also used to segregate writings by humans and computers.

In speech recognition examples, words from speeches are individually picked and separated. Further, for each word, certain features and characteristics are extracted. Feature extraction techniques include Mel Frequency Cepstral Coefficients (MFCC), Linear Prediction Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC), and others.

These methods collect audio data, feed it to SVMs and then train the models for speech recognition.

With SVMs, you can determine whether any digital image is tampered with, contaminated, or pure. Such examples are helpful when handling security-related matters for organizations or government agencies, as it is easier to encrypt and embed data as a watermark in high-resolution images.

Such images contain more pixels; hence, it can be challenging to spot hidden or watermarked messages. However, one solution is to separate each pixel and store data in different datasets that SVMs can later analyze.

Medical professionals, researchers, and scientists worldwide have been toiling hard to find a solution that can effectively detect cancer in its early stages. Today, several AI and ML tools are being deployed for the same. For example, in January 2020, Google developed an AI tool that helps in early breast cancer detection and reduces false positives and negatives.

In such examples, SVMs can be employed, wherein cancerous images are supplied as input. SVM algorithms analyze them, train the models, and eventually categorize the images as revealing malignant or benign features.

See More: What Is a Decision Tree? Algorithms, Template, Examples, and Best Practices

SVMs are crucial while developing applications that involve the implementation of predictive models. SVMs are easy to comprehend and deploy. They offer a sophisticated machine learning algorithm to process linear and non-linear data through kernels.

SVMs find applications across domains and real-life scenarios where data can be separated by mapping it into higher-dimensional spaces. Using them well entails tuning hyperparameters, selecting the right kernel, and investing time and resources in the training phase to develop the supervised learning models.

Did this article help you understand the concept of support vector machines? Comment below or let us know on Facebook, Twitter, or LinkedIn. We'd love to hear from you!

Here is the original post:
All You Need to Know About Support Vector Machines - Spiceworks News and Insights


Conservative media is lying to you. Ann Coulter encourages GOP split from Trump – The Boston Globe

Conservative media commentator Ann Coulter, once a fan of former Republican president Donald Trump and now a harsh critic, took another swipe at him this week.

In a podcast, she said, "You don't need to suck up to Trump any more, conservative talk radio hosts, talk TV hosts, Republicans running for office. He's done. He's over."

In a tweet linking to the podcast, she also told followers, "Conservative media is lying to you about Trump's popularity."

Coulter made similar points in a January column in which she declared, "No one wants Trump. He's fading faster than Sarah Palin did, and she was second place on a losing presidential ticket."

She has also called Trump "abjectly stupid" and a "liar and con man," Newsweek reports.

Martin Finucane can be reached at martin.finucane@globe.com.

Read the original:
Conservative media is lying to you. Ann Coulter encourages GOP split from Trump - The Boston Globe


What To Know About Cryptocurrency and Scams | Consumer Advice

Confused about cryptocurrencies, like bitcoin or Ether (associated with Ethereum)? You're not alone. Before you use or invest in cryptocurrency, know what makes it different from cash and other payment methods, and how to spot cryptocurrency scams or detect cryptocurrency accounts that may be compromised.

Cryptocurrency is a type of digital currency that generally exists only electronically. You usually use your phone, computer, or a cryptocurrency ATM to buy cryptocurrency. Bitcoin and Ether are well-known cryptocurrencies, but there are many different cryptocurrencies, and new ones keep being created.

People use cryptocurrency for many reasons: quick payments, avoiding transaction fees that traditional banks charge, or because it offers some anonymity. Others hold cryptocurrency as an investment, hoping the value goes up.

You can buy cryptocurrency through an exchange, an app, a website, or a cryptocurrency ATM. Some people earn cryptocurrency through a complex process called mining, which requires advanced computer equipment to solve highly complicated math puzzles.
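
The "highly complicated math puzzles" of mining are, in Bitcoin's case, hash puzzles: miners search for a nonce whose hash meets a difficulty target. Below is a toy sketch of that idea, far easier than real mining (which uses double SHA-256 and an enormously harder target):

```python
import hashlib

def mine(data: str, difficulty: int = 4) -> int:
    """Find a nonce so that sha256(data + nonce) starts with
    `difficulty` zero hex digits. A toy proof-of-work puzzle."""
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = mine("block payload", difficulty=4)
digest = hashlib.sha256(f"block payload{nonce}".encode()).hexdigest()
print(nonce, digest)
```

Finding the nonce takes many hash attempts, but anyone can verify the answer with a single hash, which is what makes the puzzle useful for securing a shared ledger.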

Cryptocurrency is stored in a digital wallet, which can be online, on your computer, or on an external hard drive. A digital wallet has a wallet address, which is usually a long string of numbers and letters. If something happens to your wallet or your cryptocurrency funds (say your online exchange platform goes out of business, you send cryptocurrency to the wrong person, you lose the password to your digital wallet, or your digital wallet is stolen or compromised), you're likely to find that no one can step in to help you recover your funds.

Because cryptocurrency exists only online, there are important differences between cryptocurrency and traditional currency, like U.S. dollars.

There are many ways that paying with cryptocurrency is different from paying with a credit card or other traditional payment methods.

Scammers are always finding new ways to steal your money using cryptocurrency. To steer clear of a crypto con, here are some things to know.

Spot crypto-related scams

Scammers are using some tried-and-true scam tactics, only now they're demanding payment in cryptocurrency. Investment scams are one of the top ways scammers trick you into buying cryptocurrency and sending it on to scammers. But scammers are also impersonating businesses, government agencies, and love interests, among other tactics.

Investment scams

Investment scams often promise you can "make lots of money" with "zero risk," and often start on social media or online dating apps or sites. These scams can, of course, start with an unexpected text, email, or call, too. And, with investment scams, crypto is central in two ways: it can be both the investment and the payment.

Here are some common investment scams, and how to spot them.

Before you invest in crypto, search online for the name of the company or person and the cryptocurrency name, plus words like review, scam, or complaint. See what others are saying. And read more about other common investment scams.

Business, government, and job impersonators

In a business, government, or job impersonator scam, the scammer pretends to be someone you trust to convince you to send them money by buying and sending cryptocurrency.

To avoid business, government, and job impersonators, know that

Blackmail scams

Scammers might send emails or U.S. mail to your home saying they have embarrassing or compromising photos, videos, or personal information about you. Then, they threaten to make it public unless you pay them in cryptocurrency. Don't do it. This is blackmail and a criminal extortion attempt. Report it to the FBI immediately.

Report fraud and other suspicious activity involving cryptocurrency to

Continued here:
What To Know About Cryptocurrency and Scams | Consumer Advice


Cryptocurrency ATMs are popping up throughout California – NewsNation Now

David Lazarus, Nexstar Media Wire

(KTLA) When you think of buying or selling cryptocurrency, a high-tech trading floor might come to mind. Maybe you think of state-of-the-art apps.

What you probably don't think of is an ATM in a gas-station convenience store or payday-loan shop.

Increasingly, however, that's how many working-class people are encountering crypto. And consumer advocates say this may not be a good thing.

"These ATMs are being put in places where retail consumers who don't have a lot of information about investing, but are excited about cryptocurrency and want to get involved, are most likely to find them," said Mark Hays, senior policy analyst for the advocacy group Americans for Financial Reform.

There are roughly 2,000 crypto ATMs in Los Angeles, mostly dealing in Bitcoin. They allow people to exchange dollars for digital currency for a fee of about 15%.

"This is something weird," Anas Elshahawy, a cashier at a Crenshaw District convenience store, told NewsNation affiliate KTLA.

He said about a half-dozen people use his shop's Bitcoin machine each week.

Experts warn, however, that using these machines can be risky.

Remember the "Fortune Favours the Brave" commercial for Crypto.com featuring Matt Damon? Since the ad debuted last October, Bitcoin has declined in value by about 60%.

In other words, if you invested $1,000 following Damon's advice, you'd now have maybe $400 to show for it.

Crypto ATMs can be used to transfer money abroad, particularly to El Salvador, which made Bitcoin a national currency.

The industry says the machines allow people without bank accounts to dabble in digital currencies. On the other hand, critics said they're also used by drug dealers and fraudsters to launder cash.

"We're well aware that there is the belief that only criminals use it, that only nefarious activity and scam victims go to ATMs," said Seth Sattler, executive director of the Cryptocurrency Compliance Cooperative, an industry group. "So as an industry, we're trying actively to prevent that to the best of our ability."

Hays said there may be nothing untoward about most crypto transactions. But he emphasizes that largely unregulated digital currencies are often more like gambling than investing.

"Putting your money into a Bitcoin ATM and hoping that you'll watch the line go up and make bank, it's equivalent to going into a casino," he said. "Sure, you can make some money. But the odds are generally stacked against you."

Original post:
Cryptocurrency ATMs are popping up throughout California - NewsNation Now


The 5 Types of Cryptocurrency To Look Out For – Patheos

Cryptocurrency is a form of digital currency that uses encryption to secure transactions. It can be used as a store of value or as an investment asset. It's also known as virtual money or digital currency, and it could also be considered an asset class. Cryptocurrency was first introduced in 2009 by an anonymous person under the pseudonym Satoshi Nakamoto. Satoshi designed the currency using cryptographic software and ensured that there would only be a limited amount produced, to prevent inflation. However, there is no central authority to control the creation of new units. As their popularity increases, cryptocurrencies are becoming more similar to real currencies.

There are different types of cryptocurrencies; however, they all have a few things in common.

The blockchain is used to ensure that each transaction is secure and recorded on a public ledger system. Each transaction can be verified by anyone who has access to the network through a specific node which corresponds with your wallet application on your smartphone device or laptop computer.
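
The public-ledger idea can be sketched as a minimal hash-chained list: each block records the hash of its predecessor, so tampering with any earlier entry invalidates everything after it. This is an illustration only, omitting consensus, signatures, and proof of work; the transactions are invented.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transaction):
    """Append a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"tx": transaction, "prev": prev})

def verify(chain):
    """Re-check every link in the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, "alice pays bob 1.0")
add_block(chain, "bob pays carol 0.5")
print(verify(chain))  # the intact chain verifies

tampered = [dict(b) for b in chain]
tampered[0]["tx"] = "alice pays bob 100.0"  # rewrite history
print(verify(tampered))  # the altered chain fails verification
```

Any node holding a copy of the ledger can run the same verification, which is how the network detects altered transaction history.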

Bitcoin is a peer-to-peer payment system. It's decentralized, meaning there is no central authority that oversees transactions and there are no intermediaries. It operates on a blockchain database, which is a public ledger of bitcoin transactions. It's also the first digital currency to be exchanged for fiat money by cryptocurrency exchanges.

Dogecoin is an altcoin that was created as a joke, derived from the doge meme of Shiba Inu internet pop culture. This cryptocurrency was introduced in 2013, and its co-creator Jackson Palmer wanted to introduce a digital currency that wasn't too serious. However, it got the attention of investors and gained a huge amount of value in a short period. Check the Dogecoin price at okx.com.

Ethereum was first described in a white paper by Vitalik Buterin, a computer programmer from Toronto, Ontario. It borrows heavily from Bitcoin and aims to do even more. It's like a supercomputer that can support a myriad of applications and services that don't necessarily need to be related to money. In fact, it can automate smart contracts, where the terms of an agreement are encoded into lines of code that execute autonomously.

Litecoin was created by former Google employee Charles Lee in 2011. It's a fork of Bitcoin and is considered the silver to Bitcoin's gold. It's faster than Bitcoin in transaction verification, and it also has a limit on the number of coins that can be mined. The more people adopt Litecoin, the more its value rises, since the supply is much smaller than Bitcoin's.

Monero is a completely anonymous digital currency. All transactions are secured using stealth addresses and ring signatures to cover up the transaction amount and the sender and receiver of funds. It's also considered more secure, which makes it ideal for those who have a lot to hide.

Investors are still searching for the holy grail, but there is no way to determine a single "best" cryptocurrency. The best cryptocurrency for you may not necessarily be the best for another person. There are many factors to consider, such as your investment goals and risk tolerance. You should also look at the fundamentals of each coin, such as market cap, price volatility, trading volume, and the potential applications that can be built on top of it.

Follow this link:
The 5 Types of Cryptocurrency To Look Out For - Patheos

Read More..