
Plant health index as an anomaly detection tool for oil refinery processes | Scientific Reports – Nature.com

PHI has been designed to capture and assess the condition of equipment during its life cycle. Thus, it may be utilized in data-driven condition-based maintenance and helps in predicting failures and malfunctions [20].

Data acquisition refers to the collection of historical data over a long duration for training a predictive model under normal operating conditions. It is preferable that the collected data cover various operating modes and may also include abnormal conditions and operational variations that result from, for example, aging of equipment, fouling, and catalyst deactivation.

The training datasets are collected in real time directly from the sensors associated with the plant components. The datasets capture the three operational modes, i.e., startup mode, normal operating mode, and shutdown mode. These modes can be subdivided into more detailed modes in some circumstances.

Although the parameters may be strongly correlated, a time lag between them can prevent the relationship from being extracted. Parameters with physical relationships exhibit such delays because a change takes time to propagate from one part of the process to another and for the system to settle into a new steady state. Consequently, even strongly associated parameters can show a modest correlation coefficient when the lag is ignored, resulting in errors during the grouping procedure. We therefore employed a dynamic sampling window that examines the temporal lag among parameters, aiding the effective grouping of strongly linked variables.

The time lag was handled using cross-correlation. For a delay of $t_d$, Eq. (9) defines the cross-correlation coefficient between two parameters $A$ $(a_0, a_1, \ldots, a_M)$ and $B$ $(b_0, b_1, \ldots, b_M)$ [21]. The means of $A$ and $B$ are $\mu_A$ and $\mu_B$, respectively.

$$\gamma_{AB}(t_d) = \frac{\sum_{i=0}^{M-1} (a_i - \mu_A)\,(b_{i-t_d} - \mu_B)}{\sqrt{\sum_{i=0}^{M-1} (a_i - \mu_A)^2}\;\sqrt{\sum_{i=0}^{M-1} (b_{i-t_d} - \mu_B)^2}}$$

(9)
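As an illustration only (PHI itself is implemented in MATLAB), the following NumPy sketch evaluates Eq. (9) for a candidate delay and scans a window of delays to find the strongest link; the function name, the synthetic signals, and the ±30-sample search range are assumptions made for this sketch, not part of the PHI software.

```python
import numpy as np

def lagged_cross_correlation(a, b, t_d):
    """Cross-correlation coefficient of Eq. (9): pairs a[i] with b[i - t_d]."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Align the two series so that a[i] is matched with b[i - t_d],
    # dropping the samples that fall outside the record.
    if t_d > 0:
        a_seg, b_seg = a[t_d:], b[:-t_d]
    elif t_d < 0:
        a_seg, b_seg = a[:t_d], b[-t_d:]
    else:
        a_seg, b_seg = a, b
    a_dev, b_dev = a_seg - a_seg.mean(), b_seg - b_seg.mean()
    return (a_dev * b_dev).sum() / (
        np.sqrt((a_dev ** 2).sum()) * np.sqrt((b_dev ** 2).sum())
    )

# Synthetic example: parameter B follows parameter A after roughly 5 samples.
rng = np.random.default_rng(0)
a = np.sin(np.linspace(0.0, 20.0, 500)) + 0.05 * rng.standard_normal(500)
b = np.roll(a, 5) + 0.05 * rng.standard_normal(500)

# Scan a window of candidate delays and keep the one with the strongest link.
best_lag = max(range(-30, 31), key=lambda d: abs(lagged_cross_correlation(a, b, d)))
```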

Parameter grouping aims to remove elements that do not provide meaningful data and to limit the number of parameters needed to adequately observe a component. The correlation coefficient employed as a reference for this grouping procedure is calculated for each pair of variables using Eq. (10); if it exceeds a specified threshold, the variable is included in the training set, and otherwise it is discarded [21].

$$\rho_{AB} = \frac{1}{M}\sum_{i=0}^{M-1}\left(\frac{a_i - \mu_A}{\sigma_A}\right)\left(\frac{b_i - \mu_B}{\sigma_B}\right)$$

(10)

where $\rho_{AB}$ is the correlation coefficient between $A$ and $B$, and $\sigma_A$ and $\sigma_B$ are their standard deviations.
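A minimal sketch of how this threshold test might be applied when screening tags for a group is shown below; the 0.20 cut-off is taken from the configuration reported later in this section, while the function names and dictionary structure are illustrative assumptions rather than part of the PHI implementation.

```python
import numpy as np

def pearson(a, b):
    """Correlation coefficient of Eq. (10), using population standard deviations."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.mean(((a - a.mean()) / a.std()) * ((b - b.mean()) / b.std()))

def screen_tags(reference, candidate_tags, rho_min=0.20):
    """Keep only the candidate tags whose |rho| against the reference signal
    meets the cut-off; the remaining tags are discarded from the training set."""
    return {name: series for name, series in candidate_tags.items()
            if abs(pearson(reference, series)) >= rho_min}
```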

There are three possible ways to group the parameters: relational grouping (tags with the same patterns are grouped together), manual grouping (each group possesses all of the tags), and success-tree-based grouping. The cut-off value of the correlation coefficient is known as the group sensitivity; the larger the group sensitivity, the more precise the grouping. When data are compressed during grouping, the Group Resolution (Shrink) feature is employed: if a tag has 1000 samples and the compression ratio is 100, the samples are compressed to 100 and the missing information is filled in by the Grid Size. The major benefits of compression are reduced data storage, data transfer time, and communication bandwidth. Time-series datasets frequently grow to terabytes and beyond, so the collected datasets must be compressed to attain the most effective model while preserving available resources.

Preprocessing of the collected data is indispensable to ensure the accuracy of the developed empirical models, which are sensitive to noise and outliers. The selection of the sampling rate is also crucial, mainly because in oil refinery processes the sampling rate (measurement frequency) is much faster than the process dynamics. In the current implementation, low-pass frequency filtering based on Fourier analysis was used to eliminate outliers, a 10-min sampling rate was selected, and the compression rate (group resolution, or shrink) was set at 1000. Moreover, a Kalman filter was applied to ensure a robust noise distribution in the collected data [5]. Another important preprocessing step is grouping. First, variables carrying useful information are grouped together; this removes redundant variables that carry no useful information and reduces the number of variables required to monitor the plant properly. Finally, the available information must be appropriately compressed by transforming high-dimensional datasets into low-dimensional features with minimal loss of class separability [21]. The maximum number of tags per group is limited to 51 in this simulation, and success-tree-based grouping is used in most cases. The minimum value of the correlation coefficient $\rho$ is set to 0.20 and the group sensitivity to 0.90; the higher the group sensitivity, the more accurate the grouping.
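A rough sketch of the filtering and resampling steps is given below, assuming the tag history is held as a pandas time series; the Fourier cut-off fraction and the synthetic one-day signal are placeholders (the paper does not report the filter settings), and the Kalman filtering and grouping steps are omitted.

```python
import numpy as np
import pandas as pd

def fourier_lowpass(series: pd.Series, keep_fraction: float = 0.05) -> pd.Series:
    """Crude low-pass filter: keep only the lowest-frequency Fourier bins."""
    spectrum = np.fft.rfft(series.to_numpy(dtype=float))
    cutoff = max(1, int(len(spectrum) * keep_fraction))
    spectrum[cutoff:] = 0.0
    return pd.Series(np.fft.irfft(spectrum, n=len(series)), index=series.index)

# One day of a raw 1-second tag (synthetic), low-pass filtered and then
# resampled to the 10-min rate used in the current implementation.
index = pd.date_range("2022-01-01", periods=86_400, freq="s")
raw = pd.Series(np.random.default_rng(1).normal(size=86_400), index=index)
smoothed = fourier_lowpass(raw)
sampled = smoothed.resample("10min").mean()   # 144 samples per day
```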

Kernel regression is a well-known non-parametric method for estimating the conditional expectation of a random variable [22,23,24,25]. The goal is to discover a non-linear relationship between two random variables, and kernel regression is a good choice when dealing with data that has a skewed distribution. The model determines the value of a parameter from the exemplar observations as a weighted average of the historical data, with the kernel function supplying the weights. A kernel is a symmetric, continuous, bounded real function that integrates to 1 and cannot take negative values. The Nadaraya-Watson estimator given by Eq. (11) is the most concise way to express kernel regression estimating $y$ with respect to the input $x$ [21,23,24].

$$\hat{y} = \frac{\sum_{i=1}^{n} K(X_i - x)\,Y_i}{\sum_{i=1}^{n} K(X_i - x)}$$

(11)

The selection of an appropriate kernel for the situation is limited by practical and theoretical concerns. Reported kernels include the Epanechnikov, Gaussian, quartic (biweight), tricube (triweight), uniform, triangular, cosine, logistic, and sigmoid kernels [25]. In the current implementation of PHI, three types of kernel are provided: uniform, triangular, and Gaussian, defined as:

Uniform kernel (rectangular window): $K(x) = \frac{1}{2}$ for $|x| \le 1$

Triangular kernel (triangular window): $K(x) = 1 - |x|$ for $|x| \le 1$

Gaussian kernel: $K(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^{2}/2}$

The default is the Gaussian kernel, which proved to be the most effective kernel for the current implementation.
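A compact NumPy sketch of the Eq. (11) estimator with these three kernels follows; the explicit bandwidth argument and the toy training points are additions for the example (Eq. (11) is written with unit bandwidth), and the code is an illustration rather than the PHI implementation.

```python
import numpy as np

KERNELS = {
    "uniform":    lambda u: 0.5 * (np.abs(u) <= 1.0),
    "triangular": lambda u: np.maximum(1.0 - np.abs(u), 0.0),
    "gaussian":   lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi),
}

def nadaraya_watson(x, X, Y, kernel="gaussian", bandwidth=1.0):
    """Eq. (11): kernel-weighted average of the stored observations Y."""
    weights = KERNELS[kernel]((np.asarray(X, dtype=float) - x) / bandwidth)
    return float(np.sum(weights * np.asarray(Y, dtype=float)) / np.sum(weights))

# Toy example: estimate the expected value at x = 2.5 from four stored points.
X_train = np.array([1.0, 2.0, 3.0, 4.0])
Y_train = np.array([1.1, 1.9, 3.2, 3.9])
y_hat = nadaraya_watson(2.5, X_train, Y_train)   # Gaussian kernel by default
```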

PHI monitors plant signals, derives actual values of operational variables, compares actual values with expected values predicted using empirical models, and quantifies deviations between actual and expected values. Before it is deployed to monitor plant operation, PHI must first be trained to predict the normal operating conditions of a process. Developing the empirical predictive model is based on a statistical learning technique consisting of an execution mode and a training mode. Methods and algorithms used in both modes of the PHI system are shown in Fig. 9.

Algorithms of the PHI [26].

In the training mode, statistical methods are used to train the model using past operating data. In the execution mode, the system identifies possible anomalies in operation by inspecting the discrepancies between values predicted by the empirical model and actual online measurements. For example, if a current operating condition approaches the normal condition, the health index is 100%; by contrast, if an operating condition approaches the alarm set point, the health index will be 0%. In terms of process uncertainty, on the other hand, the health index is characterized by the residual deviations: the health index is 100% if a current operating condition is the same as the model estimate (i.e., the residual is 0.0), and 0% if the operating conditions are far enough from the model estimate (i.e., the residual is infinity). The overall plant index is a combination of the above two health indices. Details of the method are presented in [21] and [26] and summarized in the improved statistical learning framework described below.

The framework of PHI is shown in Fig. 10. The sequence of actions in the training mode is as follows:

Acquisition of long-term historical data.

Data preprocessing such as filtering, signal compression, and grouping.

Development of the statistical model.

Evaluation of Health Index.

On the other hand, the sequence of actions in the execution mode is as follows:

Acquisition of real-time data.

Calculation of expected value from the model.

Calculation of residuals.

The decision of process uncertainty.

Calculation of PHI.

In the execution phase, the first step is to gather real-time data from the sensor signals and compare it with the model estimates. Based on the comparison, the residuals between the model estimates and the real-time measurements are evaluated. These residuals are used to predict abnormalities in the plant. Suppose the online values are [11 12 13 9 15] and the model estimates are [11 12 13 14 15]; the estimated residuals will then be [0 0 0 5 0]. These values are used in evaluating the process uncertainty (healthiness) by applying Eq. (2). Process margins, on the other hand, refer to the differences between alarms/trips and the operational conditions, which are evaluated using Eq. (1). An early warning is generated when an abnormal process uncertainty is observed earlier than a process margin. The process margins and process uncertainties are combined in overall health indices using Eq. (3).
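The residual step can be sketched as follows. Because Eqs. (1)-(3) are not reproduced in this excerpt, the exponential mapping and its scale constant below are illustrative assumptions standing in for the paper's uncertainty formula, not the actual PHI equations.

```python
import numpy as np

online   = np.array([11.0, 12.0, 13.0,  9.0, 15.0])   # real-time measurements
expected = np.array([11.0, 12.0, 13.0, 14.0, 15.0])   # model estimates

residuals = np.abs(expected - online)                  # -> [0, 0, 0, 5, 0]

# Illustrative process-uncertainty index: 100% when the residual is zero,
# falling towards 0% as the residual grows (the paper uses Eq. (2) instead).
scale = 2.0                                            # assumed tuning constant
uncertainty_index = 100.0 * np.exp(-residuals / scale)
```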

The PHI system has been developed using MATLAB. A modular approach has been used so that modifications may be easily introduced, and new algorithms may be added, integrated, and tested as independent modules. This approach was found quite appropriate for research and development purposes. Moreover, the PHI system is delivered as executable MATLAB files.

The main features and functionalities of PHI are (1) detecting the process uncertainty, in terms of a health index, for individual signals as well as for an entire plant, (2) warning anomalies in health indices, and (3) customized user interfaces and historians. Furthermore, since the PHI separately deals with safety-related and performance-related health indices, users can have appropriate decision-making in terms of their situation.

The PHI system has a client-server architecture, as shown in Fig. 11. The server side is divided into the core modules necessary to build the PHI functionality and PRISM, a real-time BNF (Breakthrough and Fusion) technology database. The clients are divided into the standard client and the web-based client. Figure 12 shows the main display of the PHI client. All of these functions bridge the information of the server side with users.

Server architecture of the PHI system [26].

Example display of the PHI indicating the (a) overall plant health index and the health indices of the (b) reaction and (c) stripper sections.

The results of the PHI can be monitored through the client computer, which has the following main features:

Index display: the default display shows the index in percent of the topmost groups, including the trend. The index of other subsystems can be seen and accessed as well.

Success tree display: The success tree display offers both a hierarchical view and a group-wise view.

Trend display: A trend display showing the actual-expected value trend.

Alarms display: A grid-based alarm display showing the latest alarm on the top display.

Reports: Reports can be generated about the health status and regular alarm.

Configuration Manager: A configuration manager, which is invoked when the PHI client application starts. The configuration manager checks the port and the server's IP address; if it cannot connect, the configuration manager window pops up at startup.


Technical Director Insights: Are We Delivering the Value of DSEA Information? – Society of Petroleum Engineers

Silviu Livescu, the technical director for SPE's Digital Science and Engineering Analytics (DSEA) discipline, asked me whether the unprecedented ability to identify underperforming wells and the rebound in wellhead oil and gas prices has led to a boom in well remediation activities (e.g., well cleanouts, reperforation, acidization, refracturing, and sidetracking).

To be honest, I don't know. But I don't think remediation activities have increased much over the pre-pandemic levels.

Rehabilitating underperforming or inactive wells, or strings, should be the normal response to relatively good product prices. The methodologies will vary from one operator to another, depending on the capabilities of the local service providers or the mobilization costs for state-of-the-art tools.

I have always been fascinated by the intellectual challenges around skin damage identification and business opportunities that cost-effective, low-risk remediation techniques represent for adding incremental asset value.

The insidious deterioration in inflow performance is often masked or complicated by other reservoir-related processes. Moreover, while the asset is contract-, facility- or equipment-limited, there is only a weak business case to maximize the inflow performance relationship or injectivity indices.

Moreover, there may be better, low-risk, modest-cost opportunities to optimize the production system using an integrated production model (IPM) or modern process controls. Nevertheless, the process of building the IPM will often result in an inventory of wells for which performance is deemed to be low or off. This inventory will include opportunities to add value by extending the plateau or accelerating resource recovery. Moreover, there is always a small element of capture in having the wells decline more acutely into the economic limit (Fig. 1).

DSEA algorithms offer a great way of ranking these value-creation opportunities and sequencing well-servicing campaigns. The ranking criteria will depend on the decision metrics used by the specific operators and their joint-venture partners and, possibly, based on the terms in the production-sharing agreement. This is a good reason for production and facilities generalists to keep abreast with modern decision analysis technologies via the SPE Management Technical Section.

My first inclination has always been to check for fill in underperforming producers and injectors with modest well inclinations. Accumulated fill can result in partial penetration skin or a reduction in the effectively drained permeability thickness (kh). Over my career, I have seen significant uplift from simple wellbore and perforation or sand-control device cleaning operations.

However, holdup depth measurements are quite an expensive exercise on high-angle or near-horizontal wells. Where equipment and crew mobilization costs dominate, problem investigation and diagnosis plans need to include a decision tree on what else should be done while the coiled tubing or downhole tractor is still on location, potentially including the entire first-stage remediation program.

Nevertheless, some of the dumbest decisions that I have made related to not putting enough thought into the design of solids, gunk, or scale cleanout operations include:

I suspect that there are now machine-learning-driven remediation program design tools to help the service company personnel avoid many of these stumbling blocks.

Moreover, we know that problems can be minimized by the operator funding a multifunctional exercise to run through the jobs on paper and conduct a hazard identification or risk mitigation exercise with the service company personnel and in-field job supervisors. (Obviously, this is even more effective if it builds on the lessons captured during the last campaign.) The expression "an ounce of prevention is worth a pound of cure" dates to the 13th century and was popularized by Benjamin Franklin in the 1700s.

Automated decline curve or rate-time analysis and integrated production performance models (e.g., PERFORM, PIPESIM, PROSPER, SAM, Wellflo, Wellflow, and WEM) can help identify these well-performance problems and the uplift opportunities. VOI analysis also can be preprogrammed and used to determine whether a single or multirate test or production-logging operation will improve the probability of success or of identifying the optimal pathway to the optimal remediation program.

The well-performance analysis tools can then be tuned to the actual flowing bottomhole pressures at the tested rates.

Fig. 2: Simulating the effects of total effective skin (S = -5, -3, 0, and +5) on the performance of a 10,000-ft total vertical depth gas well with 2.875-in. outside diameter tubing at two water/gas ratios (1 and 5 bbl/MMscf) producing against a backpressure of 500 psi.

The well shown in Fig. 2 had tested at 425 Mscf/D at 500 psi wellhead pressure with a total skin of +5 and a ΔP skin of >1,000 psi. The reservoir management team (RMT) had expected a modest negative skin from this completion. This information begs lots of questions and seeds a brainstorming session by the RMT.

Note that the inflow performance ratios for this well are so steep that incremental compression will currently add only a modest uplift at the current reservoir pressures.

A remediation project should follow the normal project management process by weighing up the available technical options in Stage 2 (Fig. 3). This review needs to include the HSE, technical and commercial risks and evaluate the available risk-management options.

IHRDC

It is also important to recognize that the RMT may have set constraints on the maximum effective drawdown beyond the damaged zone, the optimal producing gas/oil ratios or water/oil ratios at any given cumulative production volume, or pattern voidage replacement ratios.

As Amir Al-Wazzan, production assurance manager for technology deployment and innovation at Dragon Oil, pointed out to me, the decision on whether to select a conventional or cutting-edge remediation technology will depend on the resources available from the local service providers and the costs and perceived risks involved in mobilizing, deploying, and using the latest technologies. In many cases, it is highly attractive to use familiar, proven technologies to clean out, revitalize, and re-equip idle wells or underperforming wells. Workovers may offer greater upside, but the options may be limited by the available budget and the perceived HSE or well integrity risks.

Pilot testing of new technologies needs to be benchmarked against providing an equal effort into optimizing and supervising well-established approaches for well remediation in a specific asset and region. Nevertheless, as Livescu suggested, data mining and modern data analysis techniques allow us to do a much better job in finding good field analogies where new approaches to well remediation and restimulation have proven to have a high probability of commercial success.

Dan Hill, professor at the College of Engineering at Texas A&M, said in the promotional SPE Live for his 2022–2023 Distinguished Lecture, "Acid Stimulation of Carbonate Formations: Matrix Acidizing or Acid Fracturing?," that most carbonates are so acid soluble that achieving a negative skin should be relatively easy. His talk presents a relatively simple methodology for evaluating whether to attempt an acid fracture stimulation or a complex wormhole configuration with a well-executed matrix acid treatment. This decision depends on the rock properties and closure stress, which will vary with depth and with reservoir pressure decline.

I am eager for the opportunity to see the entire presentation and ask questions about

In Schlumberger's SPE Tech Talk "New Single-Stage Sandstone Acidizing Solution: High Efficiency and Low Risk," Pedro Artola and Temi Yusuf pointed out that this proprietary one-step acid blend provides a low-risk, low-volume, cost-effective method to stimulate sandstone reservoirs containing less than 20% carbonates. The blend is specifically designed to avoid the overspending challenges with conventional low-strength, mud acid treatments at temperatures of up to 150°C. A single fluid also helps reduce treatment complexity in multistage treatments in treating zones with more than 6 m of net pay, where diversion is required.

I have also found it quite challenging to identify the most cost-effective options for treating underperforming or damaged water-injection and saltwater disposal wells. The root causes of injection-well damage will generally depend on the water-plant and wellhead filtration strategy, the nature of any water treatment deficiencies, the nature and frequency of process upsets, and the corrosion or fluid-scaling tendencies that often vary over the life of the asset.

In well-established injection wells, this likely involves damage to the thermally induced fractures, rather than being restricted to the perforation tunnels. With thick pay zones, this poses significant treatment diversion challenges. Moreover, in high-angle or near-horizontal injection wells, there will likely be multiple, near-parallel fractures, only a few of which will control the well performance.

In any event, it is often necessary to conduct at least part of the treatment close to the pressure limits of the completion or wellhead and, potentially, at the fracture propagation pressure.

The SPE Production and Facilities Advisory Committee have been discussing the need for additional, vertically integrated, specialized events on well remediation to disseminate the latest experience in pilot testing of new technologies. However, it is possible that these topics are adequately covered by presentations at SPE regional workshops or the larger sections (such as Saudi, Gulf Coast, Aberdeen/Stavanger, and Brazil) and at cross-functional events such as SPE's Annual Technical Conference and Exhibition, the International Petroleum Technology Conference, regional Offshore Technology Conference events, and Offshore Europe.

To avoid overlap in this content and competition for sponsorship dollars and quality papers and presentations, I have started a discussion chain on SPE Connect to solicit your input.


'I had an easier time during 2008': TikToker says they've applied to 300 tech jobs and gotten 3 interviews – The Daily Dot

A TikToker said he applied to over 300 jobs but only received three interviews in the past two years, despite having 15 years of experience in his field, according to a video.

The TikToker, @hksmash, said he has taken all the Coursera (an online certification platform) certifications for his career "just for shits and gigs" and has still only had three interviews, two of which he was ghosted on by the hiring manager.

"The one interview I had this year, I actually got a job offer," he said in the video. "Only for that offer to be rescinded because they no longer had budget for the role."

His post came in response to another TikToker saying that no one seemed to be having any luck getting hired right now.

In the comments, hksmash clarified he works in tech and IT support.

According to the U.S. Bureau of Labor Statistics, 5.9 million people were not in the labor force but wanted a job in July 2022, despite the narrative that there is a labor shortage. There were 10.7 million job openings during the month of June.

"I had an easier time finding a job during the 2008 housing market crash," he added.

TikTok users in the comments vented about their frustration with the job market.

"I've heard the theory that it's data mining," one commenter said. "they say they're hiring, when they're actually not, to get data from the applications."

"Had a feeling this was a tech thing," another wrote. "Most companies are in hiring freezes or downsizing."

"Bro, i just got rejected for an internal position for a teachable position because I don't have enough experience," another said.

*First Published: Aug 25, 2022, 3:32 pm CDT

Jacob Seitz is a freelance journalist originally from Columbus, Ohio, interested in the intersection of culture and politics.


A comprehensive meta-analysis and prioritization study to identify vitiligo associated coding and non-coding SNV candidates using web-based…


Nearly 3 Years Later, SolarWinds CISO Shares 3 Lessons From the Infamous Attack – DARKReading

On Dec. 8, 2020, FireEye announced the discovery of a breach in the SolarWinds Orion software while it investigated a nation-state attack on its Red Team toolkit. Five days later, on Dec. 13, 2020, SolarWinds posted on Twitter, asking "all customers to upgrade immediately to Orion Platform version 2020.2.1 HF 1 to address a security vulnerability." It was clear: SolarWinds, the Texas-based company that builds software for managing and protecting networks, systems, and IT infrastructure, had been hacked.

More worrisome was the fact that the attackers, which US authorities have now linked to Russian intelligence, had found the backdoor through which they infiltrated the company's system about 14 months before the hack was announced. The SolarWinds hack is now almost 3 years old, but its aftereffects continue to reverberate across the security world.

Let's face it: The enterprise is constantly under threat either from malicious actors who attack for financial gains or hardened cybercriminals who extract and weaponize data crown jewels in nation-state attacks. However, supply chain attacks are becoming more common today, as threat actors continue to exploit third-party systems and agents to target organizations and break through their security guardrails. Gartner predicts that by 2025, "45% of organizations worldwide will have experienced attacks on their software supply chains," a prediction that has created a ripple across the cybersecurity world and led more companies to start prioritizing digital supply chain risk management.

While this is the right direction for enterprises, the question still lingers: What lessons have organizations learned from a cyberattack that went across the aisle to take out large corporations and key government agencies with far-reaching consequences even in countries beyond the United States?

To better understand what happened with the attack and how organizations can prepare for eventualities like the SolarWinds hack, Dark Reading connected with SolarWinds CISO Tim Brown for a deeper dive into the incident and lessons learned three years on.

Brown admits that the very name SolarWinds serves as a reminder for others to do better, fix vulnerabilities, and strengthen their entire security architecture. Since all systems are vulnerable, collaboration is an integral part of the cybersecurity effort.

"If you look at the supply chain conversations that have come up, they're now focusing on the regulations we should be putting in place and how public and private actors can better collaborate to stall adversaries," he says. "Our incident shows the research community could come together because there's so much going on there."

After standing at the frontlines of perhaps the biggest security breach in recent years, Brown understands that collaboration is critical to all cybersecurity efforts.

"A lot of conversations have been ongoing around trust between individuals, government, and others," he says. "Our adversaries share information and we need to do the same."

No organization is 100% secure 100% of the time, as the SolarWinds incident demonstrated. To bolster security and defend their perimeters, Brown advises organizations to adopt a new approach that sees the CISO role move beyond being a business partner to becoming a risk officer. The CISO must measure risk in a way that's "honest, trustworthy, and open" and be able to talk about the risks they face and how they are compensating for them.

Organizations can become more proactive and defeat traps before they are sprung by using artificial intelligence (AI), machine learning (ML), and data mining, Brown explains. However, while organizations can leverage AI to automate detection, Brown warns there's a need to properly contextualize AI.

"Some of the projects out there are failing because they are trying to be too big," he says. "They're trying to go without context and aren't asking the right questions: What are we doing manually and how can we do it better? Rather, they're saying, 'Oh, we could do all of that with the data' and it's not what you necessarily need."

Leaders must understand the details of the problem, what outcome they are hoping for, and see if they can prove it right, according to Brown.

"We just have to get to that point where we can utilize the models on the right day to get us somewhere we haven't been before," he says.

IT leaders must stay a step ahead of adversaries. However, it's not all doom and gloom. The SolarWinds hack was a catalyst for so much great work happening across the cybersecurity board, Brown says.

"There are many applications being built in the supply chain right now that can keep a catalog of all your assets so that if a vulnerability occurs in a part of the building block, you will know, enabling you to assess if you were impacted or not," he says.

This awareness, Brown adds, can help in building a system that tends toward perfection, where organizations can identify vulnerabilities faster and deal with them decisively before malicious actors can exploit them. It's also an important metric as enterprises edge closer to the zero-trust maturity model prescribed by the Cybersecurity and Infrastructure Security Agency (CISA).

Brown says he is hopeful these lessons from the SolarWinds hack will aid enterprise leaders in their quest to secure their pipelines and remain battle-ready in the ever-evolving cybersecurity war.


Data classification: What it is and why you need it – ComputerWeekly.com

CIOs and IT directors working on any project that involves data in any way are always more likely to succeed when the organisation has a clear view of the data it holds.

Increasingly, organisations are using data classification to track information based on its sensitivity and confidentiality, as well as its importance to the business.

Data that is critical to operations or that needs to be safeguarded, such as customer records or intellectual property, is more likely to be encrypted, to have access controls applied, and be hosted on the most robust storage systems with the highest levels of redundancy.

AWS, for example, defines data classification as a way to categorise organisational data based on criticality and sensitivity in order to help you determine appropriate protection and retention controls.

However, data protection measures can be costly, in cash terms and potentially in making workflows more complex. Not all data is equal, and few firms have bottomless IT budgets when it comes to data protection.

But a clear data classification policy should ensure compliance and optimise costs and it can also help organisations make more effective use of their data.

Data classification policies are one of the Swiss Army knives of the IT toolbox.

Organisations use their policies as part of their business continuity and disaster recovery planning, including setting backup priorities.

They use them to ensure compliance with regulations such as GDPR, PCI-DSS and HIPAA.

These policies are fundamental to effective data security, setting rules for encryption, data access, and even who can amend or delete information.

Data classification policies are also a key part of controlling IT costs, through storage planning and optimisation. This is increasingly important, as organisations store their data in the public cloud with its consumption-based pricing models.

But it is also essential to match the right storage technologies to the right data, from high-performance flash storage for transactional databases, to tape for long-term archiving. Without this, firms cannot match storage performance, associated compute and networking costs, to data criticality.

In fact, with organisations looking to drive more value from their information, data classification has another role helping to build data mining and analytics capabilities.

"The topic of data management has crept up in importance among the leadership teams of many organisations over the past few years," says Alastair McAulay, an IT strategy expert at PA Consulting.

"There are two big drivers for this. The first driver is a positive one, where organisations are keen to maximise the value of their data, to liberate it from individual systems and place it where it can be accessed by analytics tools to create insight, to improve business performance.

"The second driver is a negative one, where organisations discover how valuable their data is to other parties."

Organisations need to protect their data, not just against exfiltration by malicious hackers, but against ransomware attacks, intellectual property theft and even the misuse of data by otherwise-trusted third parties. As McAulay cautions, firms cannot control this unless they have a robust system for labeling and tracking data.

Effective data classification policies start out with the three basic principles of data management: confidentiality, integrity and availability.

This CIA model or triad is most often associated with data security, but it is also a useful starting point for data classification.

Confidentiality covers security and access controls ensuring only the right people view data and measures such as data loss prevention.

Integrity ensures that data can be trusted during its lifecycle. This includes backups, secondary copies and volumes derived from the original data, such as by a business intelligence application.

Availability includes hardware and software measures such as business continuity and backup and recovery, as well as system uptime and even ease of access to the data for authorised users.

CIOs and chief data officers will then want to extend these CIA principles to fit the specific needs of their organisations and the data they hold.

This will include more granular information on who should be able to view or amend data, extending to which applications can access it, for example through application programming interfaces (APIs). But data classification will also set out how long the data should be retained for, where it should be stored, in terms of storage systems, how often it should be backed up, and when it should be archived.

"A good data backup policy may well rely on a data map so that all data used by the organisation is located and identified and therefore included in the relevant backup process," says Stephen Young, director at data protection supplier AssureStor. "If disaster strikes, not everything can be restored at once."

One of the more obvious data classification examples is where organisations hold sensitive government information. This data will have protective markings (in the UK, these range from "official" to "top secret") which can be followed by data management and data protection tools.

Firms might want to emulate this by creating their own classifications, for example by separating out financial or health data that has to comply with specific industry regulations.

Or firms might want to create tiers of data based on their confidentiality, around R&D or financial deals, or how important it is to critical systems and business processes. Unless organisations have the classification policy in place, they will not be able to create rules to deal with the data in the most appropriate way.
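As an illustration of how such tiers can be made machine-actionable, the sketch below maps classification labels to handling rules; the labels and control values are invented examples for this article, not a recommended standard or any vendor's schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HandlingRule:
    encrypt_at_rest: bool
    backup_interval_hours: int
    retention_years: int
    storage_tier: str

# Example policy: each classification label carries its own controls.
POLICY = {
    "public":       HandlingRule(False, 168, 1,  "archive"),
    "internal":     HandlingRule(True,   24, 3,  "standard"),
    "confidential": HandlingRule(True,    4, 7,  "replicated-flash"),
    "restricted":   HandlingRule(True,    1, 10, "replicated-flash"),
}

def controls_for(label: str) -> HandlingRule:
    """Return the handling rules attached to a data asset's classification label."""
    return POLICY[label]

# e.g. controls_for("confidential").backup_interval_hours -> 4
```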

"A good data classification policy paves the way for improvements to efficiency, quality of service and greater customer retention if it is used effectively," says Fredrik Forslund, vice-president international at data protection firm Blancco.

A robust policy also helps organisations to deploy tools that take much of the overhead out of data lifecycle management and compliance. Amazon Macie, for example, uses machine learning and pattern matching to scan data stores for sensitive information. Meanwhile, Microsoft has an increasingly comprehensive set of labelling and classification tools across Azure and Microsoft 365.

However, when it comes to data classification, the tools are only as good as the policies that drive them. With boards' increasing sensitivity to data and IT-related risks, organisations should look at the risks associated with the data they hold, including the risks posed by data leaks, theft or ransomware.

These risks are not static. They will evolve over time. As a result, data classification policies also need to be flexible. But a properly designed policy will help with compliance, and with costs.

There is no avoiding the fact that creating a data classification policy can be time-consuming, and it requires technical expertise from areas including IT security, storage management and business continuity. It also needs input from the business to classify data, and ensure legal and regulatory compliance.

But, as experts working in the field say, a policy is needed to ensure security and control costs, and to enable more effective use of data in business planning and management.

"Data classification helps organisations reduce risk and enhance the overall compliance and security posture," says Stefan Voss, a vice-president at IT management tool company N-able. "It also helps with cost containment and profitability due to reduction of storage costs and greater billing transparency."

Also, data classification is a cornerstone of other policies, such as data lifecycle management. And it helps IT managers create effective recovery time objectives (RTOs) and recovery point objectives (RPOs) for their backup and disaster recovery plans.

Ultimately, organisations can only be effective in managing their data if they know what they have, and where it is. As PA Consulting's McAulay says: "Tools will only ever be as effective as the data classification that underpins them."


‘A Historic Moment’: New Guidance Requires Federally Funded Research to Be Open Access – The Chronicle of Higher Education

In a move hailed by open-access advocates, the White House on Thursday released guidance dictating that federally funded research be made freely and immediately available to the public.

The Office of Science and Technology Policy's guidance calls for federal agencies to make taxpayer-supported research publicly available immediately, doing away with an optional 12-month embargo. It also requires the data underlying that research to be published. Federal agencies have until December 31, 2025, to institute the guidance.

"The American people fund tens of billions of dollars of cutting-edge research annually. There should be no delay or barrier between the American public and the returns on their investments in research," Alondra Nelson, head of the office, known as OSTP, said in a news release.

Heather Joseph, executive director of the Scholarly Publishing and Academic Resources Coalition, told The Chronicle that the announcement was "extremely welcome news." The provision requiring data to be published, she said, is especially significant and will help boost scientific integrity and trust in science by allowing other scientists to validate researchers' conclusions.

Nelson's memo outlining the guidance cites the Covid-19 pandemic as a powerful case study on the benefits of delivering research results and data rapidly to the people. At the outset of the pandemic, scholarly publishers lifted their paywalls for Covid-related articles and made research available in machine-readable formats, which Joseph said allowed scholars to use text- and data-mining, artificial-intelligence, and computational techniques on others' work.

The new guidance expands on a 2013 memo issued by OSTP during the Obama administration. That memo applied only to federal agencies that fund more than $100 million in extramural research; the Biden memo has no such cap. That means that, for example, work funded by the National Endowment for the Humanities, which didn't meet the $100-million threshold in 2013, will for the first time be covered by federal open-access policy, Peter Suber, director of the Harvard Open Access Project, wrote on Twitter.

The Association of Research Libraries welcomed the expansion in a statement that described the memo as "a historic moment for scientific communications."

Lifting the yearlong embargo that some journals have imposed on papers they publish will promote more equitable access to research, some said. "The previous policy limited immediate equitable access of federally funded research results to only those able to pay for it or have privileged access through libraries or other institutions," two officials in the White House office wrote in a blog post. "Financial means and privileged access must never be the prerequisite to realizing the benefits of federally funded research that all Americans deserve."

That's a theme President Biden has championed for years. Thursday's White House news release quoted his remarks to the American Association for Cancer Research as vice president in 2016, when he criticized taxpayer-funded research that sits behind walls put up by journals' subscription fees.

Sen. Ron Wyden, a Democrat from Oregon, released a statement praising the guidance for unlocking federally funded research from expensive, exclusive journals and calling it "an astronomical win for innovation and scientific progress." (Wyden and a fellow Democratic senator, Ed Markey of Massachusetts, in February urged Nelson to establish an open-access policy.) And Michael Eisen, a co-founder of the open-access project PLOS, applauded the guidance on Twitter. "The best thing I can say about this new policy," he wrote, "is that publishers will hate it."

It's not clear how academic publishers, whose profits and business model will be affected, plan to adapt to the new guidelines. A spokesperson for Elsevier, a leading commercial publisher of academic journals, wrote in an email to The Chronicle that Elsevier actively supports open access to research and that 600 of its 2,700 journals are fully open-access (nearly all of the others, the spokesperson wrote, enable open-access publishing). "We look forward to working with the research community and OSTP to understand its guidance in more detail."

Emails from The Chronicle to three other major academic publishers (Springer Nature, Taylor & Francis, and Wiley) did not draw an immediate response.

Some commentators worried that publishers would raise the article-processing charges, or APCs, associated with open-access publishing in their journals. But Joseph, of the academic-resources coalition, said she hopes language in the guidance that encourages measures to reduce inequities in publishing, particularly among early-career scholars and those from underserved backgrounds, will prevent that.

"Those publishers that try to charge ridiculously high APCs will find it difficult, because inequity in publishing means I'm priced out of being able to publish. I can't afford to contribute my research article to the scientific record," Joseph said. The White House's blog post also noted that it was working to ensure support for more vulnerable members of the research ecosystem unable to pay rising costs associated with publishing open-access articles.

And authors have other options by which to make their work open, Joseph said. The guidance, she noted, allows authors to make their manuscripts freely available in an agency-designated repository even if it's also published in a journal.

The National Institutes of Health, which finances more than $32 billion a year in biomedical research, promised on Thursday to comply with the new guidance. "We are enthusiastic to move forward on these important efforts to make research results more accessible, and look forward to working together to strengthen our shared responsibility in making federally funded research results accessible to the public," Lawrence A. Tabak, acting director of the NIH, wrote in a statement.


Health insurance Market is expected to Reach USD 2541.78 Billion by 2029 at a Potential Growth rate of 4.6% – Insurance News Net

Health Insurance Market

Health insurance Market industry Analysis and Forecast 2029

PUNE, MAHARASHTRA, INDIA, August 25, 2022 /EINPresswire.com/ -- The Health Insurance Market business report provides accurate market research that helps identify business areas that are performing well, those that need more attention, and those that the business should perhaps give up. If a business has its pulse on what customers are thinking, it can create products that solve their issues, reach out to them when they are most ready to listen, and help them become loyal ambassadors. The universal Global Health Insurance Market report makes this possible by following what customers are talking about, listening to them, and then delivering on their needs with timely, customer-centred market research.

A health insurance policy consists of several types of features and benefits. It provides financial coverage to the policyholder against certain treatments, offering advantages including cashless hospitalization, coverage of pre- and post-hospitalization, reimbursement, and various add-ons. Increasing costs for medical services and the growing number of daycare procedures are some of the drivers boosting health insurance demand in the market. Data Bridge Market Research estimates that the health insurance market will reach a value of USD 2,541.78 billion by 2029, at a CAGR of 4.6% during the forecast period. "Corporates" account for the most prominent end-user segment in the market owing to the rise in demand for group health insurance by corporates. The market report curated by the Data Bridge Market Research team includes in-depth expert analysis, import/export analysis, pricing analysis, production consumption analysis, and climate chain scenario.

Health insurance is a type of insurance that provides coverage for all types of surgical expenses and medical treatment incurred from illness or injury. It applies to a comprehensive or limited range of medical services, covering the full or partial cost of specific services. It provides financial support to the policyholder by covering medical expenses when the policyholder is hospitalized for treatment, including pre- as well as post-hospitalization expenses.

Get PDF Sample of Global Health Insurance Market report: - https://www.databridgemarketresearch.com/request-a-sample/?dbmr=global-health-insurance-market

Drivers/Opportunities in the Health Insurance Market

Increasing Cost of Medical Services

Health insurance provides financial support in cases of serious sickness or accidents. Increasing costs of medical services such as surgeries and hospital stays have created a new financial burden around the world. The cost of medical services comprises the cost of surgery, doctor fees, hospital stays, emergency room care, and diagnostic testing, among others. This increase in the cost of medical services therefore propels the growth of the market.

Growing Number of Daycare Procedures

Day-care procedures are medical procedures or surgeries that require only a short stay in the hospital. Most health insurance companies now cover day-care procedures in their insurance plans, and claims for such procedures do not require the usual minimum 24-hour hospital stay. While most health insurance plans cover hospital stays and major surgeries, policyholders can also claim day-care procedures under their health insurance policy, which propels demand in the market.

Mandatory Provision of Healthcare Insurance in Public and Private Sectors

Buying a healthcare insurance policy is a mandatory provision for employees in both the public and private sectors. Health insurance offers key medical benefits that employees can avail of while working for a company; in case of an emergency or medical issue, the cover is highly useful for meeting treatment expenses. Employee health insurance is an extended benefit given by the employer to their employees, and it covers not only the employee but also their family members under the same policy plan. In certain cases, the employer may also pay a part of the premium or insurance coverage of the health insurance policy.

Advantages of Health Insurance Policies

Under health insurance plans, the policyholder is reimbursed for medical expenses such as hospitalization, surgeries, and treatments arising from injuries. A health insurance policy is an agreement between the policyholder and the insurance company, in which the insurance company agrees to guarantee payment for treatment costs in case of future medical issues, and the policyholder agrees to pay the premium according to the insurance plan. The advantages of health insurance policies thus increase the growth opportunities for the global health insurance market.

Increasing Healthcare Expenditure

Spending on health is growing faster around the world. According to a World Health Organization (WHO) report, global health spending is on an upward trajectory. Global spending on health more than doubled over the past two decades, reaching USD 8.5 trillion in 2019, or 9.8% of global GDP. However, it was unequally distributed, with high-income countries accounting for approximately 80% of the world's health spending. Health spending in low-income countries was financed primarily by out-of-pocket spending (OOPS; 44%) and external aid (29%), while government spending dominated in high-income countries (70%). The increasing healthcare expenditure is thus expected to act as an opportunity in the global health insurance market.

Global Health Insurance Market Scope

The health insurance market is segmented on the basis of type, services, level of coverage, service providers, health insurance plans, demographics, coverage type, end user, and distribution channel. The analysis of growth amongst these segments helps identify slower-growing segments in the industry and provides users with a valuable market overview and market insights to support strategic decisions when identifying core market applications.

Type

Product

Solutions

Services

Inpatient Treatment

Outpatient Treatment

Medical Assistance

Others

Level of Coverage

Bronze

Silver

Gold

Platinum

Service Providers

Private Health Insurance Providers

Public Health Insurance Providers

Health Insurance Plans

Point Of Service (POS)

Exclusive Provider Organization (EPOS)

Indemnity Health Insurance

Health Savings Account (HSA)

Qualified Small Employer Health Reimbursement Arrangements (QSEHRAS)

Preferred Provider Organization (PPO)

Health Maintenance Organization (HMO)

Others

Demographics

Adults

Minors

Senior Citizens

Coverage Type

Lifetime Coverage

Term Coverage

End User

Corporates

Individuals

Others

Distribution Channel

Direct Sales

Financial Institutions

E-Commerce

Hospitals

Clinics

Others

Gain More Insights into the Global Health Insurance Market Analysis, Browse Summary of the Research Report @ https://www.databridgemarketresearch.com/reports/global-health-insurance-market

Health Insurance Market Regional Analysis/Insights

The health insurance market is analysed and market size insights and trends are provided by country, type, services, level of coverage, service providers, health insurance plans, demographics, coverage type, end user, and distribution channel as referenced above.

The countries covered in the health insurance market report are the U.S., Canada and Mexico, Germany, France, U.K., Netherlands, Switzerland, Belgium, Russia, Italy, Spain, Turkey, Rest of Europe in Europe, China, Japan, India, South Korea, Singapore, Malaysia, Australia and New Zealand, Thailand, Indonesia, Philippines, Hong Kong, Taiwan & Rest of Asia-Pacific, Saudi Arabia, United Arab Emirates, South Africa, Egypt, Israel, Rest of Middle East and Africa, Brazil, Argentina and Rest of South America.

North America dominates the health insurance market because of the high disposable income of consumers. North America is followed by Europe, which is expected to witness significant growth during the forecast period of 2022 to 2029 due to growing demand for health insurance from the corporate sector in the region. Europe is followed by Asia-Pacific, which is expected to grow significantly owing to rising awareness of the benefits and advantages offered by health insurance plans.

Competitive Landscape and Health Insurance Market Share Analysis

The health insurance market competitive landscape provides details by competitor. Details included are company overview, company financials, revenue generated, market potential, investment in research and development, new market initiatives, global presence, production sites and facilities, production capacities, company strengths and weaknesses, product launch, product width and breadth, and application dominance. The data points provided relate only to the companies' focus on the health insurance market.

Some of the major players operating in the health insurance market are Bupa, Now Health International, Cigna, Aetna Inc. (a subsidiary of CVS Health), AXA, HBF Health Limited, Vitality (a subsidiary of Discovery Limited), Centene Corporation, International Medical Group, Inc. (a subsidiary of Sirius International Insurance Group Ltd.), Anthem Insurance Companies, Inc. (a subsidiary of Anthem, Inc.), Broadstone Corporate Benefits Limited, Allianz Care (a subsidiary of Allianz SE), HealthCare International Global Network Ltd, Assicurazioni Generali S.P.A., Aviva, Vhi Group, UnitedHealth Group, MAPFRE, AIA Group Limited, and Oracle, among others.

Get TOC Detail of Global Health Insurance Market Report: - https://www.databridgemarketresearch.com/toc/?dbmr=global-next-generation-sequencing-ngs-market

Research Methodology: Global Health Insurance Market

Data collection and base-year analysis are done using data collection modules with large sample sizes. The market data is analysed and estimated using statistical and coherent market models. Market share analysis and key trend analysis are also major success factors in the market report. To know more, please request an analyst call or drop in your inquiry.

The key research methodology used by DBMR research team is data triangulation which involves data mining, analysis of the impact of data variables on the market, and primary (industry expert) validation. Apart from this, data models include Vendor Positioning Grid, Market Time Line Analysis, Market Overview and Guide, Expert Analysis, Import/Export Analysis, Pricing Analysis, Production Consumption Analysis, Climate Chain Scenario, Company Positioning Grid, Company Market Share Analysis, Standards of Measurement, Global versus Regional and Vendor Share Analysis. To know more about the research methodology, drop in an inquiry to speak to our industry experts.

Customization Available: Global Health Insurance Market

Data Bridge Market Research is a leader in advanced formative research. We take pride in serving our existing and new customers with data and analysis that match and suit their goals. The report can be customized to include price trend analysis of target brands, understanding the market for additional countries (ask for the list of countries), clinical trial results data, literature review, and refurbished market and product base analysis. Market analysis of target competitors can range from technology-based analysis to market portfolio strategies. We can add as many competitors as you require data about, in the format and data style you are looking for. Our team of analysts can also provide data in raw Excel files and pivot tables (Fact book) or assist you in creating presentations from the data sets available in the report.

FREQUENTLY ASKED QUESTIONS

At what growth rate is the Health Insurance Market projected to grow over the forecast period to 2029?

What will be the Health Insurance Market value in the future?

What are the key opportunities of the Health Insurance Market?

Who are the major players operating in the Health Insurance Market?

Top Trending Reports: -

Asia-Pacific Health Insurance Market Industry Trends and Forecast to 2029

Europe Health Insurance Market - Industry Trends and Forecast to 2029

Middle East and Africa Health Insurance Market - Industry Trends and Forecast to 2029

North America Health Insurance Market - Industry Trends and Forecast to 2029

Singapore Private Health Insurance Market - Industry Trends and Forecast to 2029

About Data Bridge Market Research:

An absolute way to forecast what the future holds is to comprehend the trend today! Data Bridge Market Research has set itself forth as an unconventional and neoteric market research and consulting firm with an unparalleled level of resilience and integrated approaches. We are determined to unearth the best market opportunities and foster efficient information for your business to thrive in the market. Data Bridge endeavours to provide appropriate solutions to complex business challenges and initiates an effortless decision-making process. Data Bridge is an outcome of sheer wisdom and experience, formulated and framed in the year 2015 in Pune.

Data Bridge Market Research has over 500 analysts working across different industries. We have catered to more than 40% of the Fortune 500 companies globally and have a network of more than 5,000 clients around the globe. Data Bridge is adept at creating satisfied clients who rely on our services and our hard work with certitude. We are content with our glorious 99.9% client satisfaction rate.

Sopan Gedam
Data Bridge Market Research
+1 888-387-2818

Read more:

Health insurance Market is expected to Reach USD 2541.78 Billion by 2029 at a Potential Growth rate of 4.6% - Insurance News Net


The Merge That Will Kill $19 Billion Worth Industry – Analytics India Magazine

Blockchain technology based on the consensus mechanism called proof-of-work (PoW) has been sucking the energy out of the world. The problem is so large that China banned crypto mining last year, and countries like Kosovo and Kazakhstan are pushing miners out of business by cutting off their electricity. The PoW mechanism looks so bad that, if left unchecked, the world will run out of energy: that, at least, is what detractors say about blockchain.

Bitcoin and Ethereum are two of the world's biggest blockchain-based tech behemoths that use the PoW consensus mechanism. It is the method that allows a decentralised network to reach agreement on things like account balances and transaction sequence.

But Ethereum seems to be done with PoW now. The Ethereum Foundation announced that it will be shifting to the proof-of-stake (PoS) consensus mechanism, a selective method of choosing a participant who will solve the equations and accurately validate the data.

It has decided on a name and a date: the transition, called the Merge, will happen in two phases between September 6 and September 20. The shift from PoW to PoS might be good for the platform and the world, but it has created huge uncertainty for miners.

Mining is the process of adding valid blocks to the chain. Those who do this job are called miners.

Currently, miners are able to produce fresh Ether (ETH) by committing a significant amount of processing power. However, following the Merge, network users, known as validators, would be required to stake significant sums of already-existing ETH in order to validate blocks, generating new ETH and receiving stake rewards.
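To make that distinction concrete, the toy sketch below contrasts a proof-of-work search for a valid nonce with a stake-weighted pick of a validator. It is a deliberately simplified illustration of the two selection mechanisms, not Ethereum's actual protocol; the hash puzzle, the difficulty prefix, and the example stake amounts are all assumptions made for the example.

```python
# Toy illustration of PoW vs. PoS block-proposer selection; not Ethereum's real protocol.
import hashlib
import random

def toy_proof_of_work(block_data: str, difficulty_prefix: str = "0000") -> int:
    """Brute-force a nonce until the block hash starts with the target prefix."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(difficulty_prefix):
            return nonce  # the "work" is the search for this nonce
        nonce += 1

def toy_proof_of_stake(stakes: dict) -> str:
    """Pick a validator with probability proportional to the ETH it has staked."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return random.choices(validators, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Example block data and stake amounts are made up for illustration.
    print("PoW nonce found:", toy_proof_of_work("block #1: alice->bob 1 ETH"))
    print("PoS validator chosen:", toy_proof_of_stake({"v1": 32.0, "v2": 64.0, "v3": 320.0}))
```

The point of the contrast is that the first function consumes computation until it stumbles on an answer, while the second simply weights a random draw by locked-up capital, which is why the switch removes the economic role of mining hardware.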

Ethereum mining is estimated to be worth $19 billion, and the Ethereum Merge is expected to instantly put an end to the entire sector. It's conceivable that Ethereum miners won't be able to make a profit mining alternative cryptocurrencies with low market value. With the exception of Ethereum, less than 2% of all altcoins have a market cap that can be mined with a GPU (graphics processing unit).

GPU (graphics processing unit) rigs, which are more adaptable than those used for bitcoin mining and simpler to modify to mine other coins, predominate in Ethereum mining. GPU rigs can be utilised both for gaming and for mining compatible cryptocurrencies, including Ethereum Classic, Ravencoin, and Ergo.

Ether (ETH) mining is popular due to its profitability, and a shift to mining other cryptocurrencies could result in a short-term decrease in revenues. Furthermore, a rapid influx of huge mining pools to a different coin may strain margins for incumbents. Beyond Ethereum Classic, ether miners using ASIC (application-specific integrated circuits) equipment are left with few options.

Ethereum Classic (ETC) miners may benefit the most from the changeover. This is because there will most likely be a flood of used mining rigs accessible from ETH miners who have chosen to become validators on Ethereum 2.0.

According to one research note, miners may distribute their rigs across various networks that support GPU mining, as well as other applications, in order to obtain salvage value after Ethereum switches to PoS; the note also predicted that mining pools will likely transition relatively smoothly as the switchover approaches.

Experts say that large-scale miners intend to change their strategy and concentrate on high-performance computers and a business centred around data centres. Resources can be pooled by miners to support Web3 protocols like Render Network and Livepeer. As the miners depart, there is concern about rising selling pressure on Ethereum.

See the original post here:

The Merge That Will Kill $19 Billion Worth Industry - Analytics India Magazine


Parents Wary of Digital Hall Pass That Records Students’ Movements – The Epoch Times

In the old days, when a student requested a bathroom visit during classroom time, the teacher handed them a hall pass made of paper or sometimes a wooden block.

Now, many schools are taking the hall pass digital with SmartPass, a hall pass application that tracks students and generates weekly reports about trends in their movements.

"Students are getting smarter and sneakier than ever, and you may not know that Betty and Bob have been meeting up every afternoon for the last two weeks, but SmartPass knows," an advertising video directed at school administrators says, while explaining a new feature called Encounter Detection. "Behind the scenes we are using artificial intelligence to see which students have passes to similar rooms around the same time, and presenting you with actionable data to start a conversation."

For example, the data may show that two students from different classrooms asked to go to the water fountain five times last week, and their hall pass time overlapped for a few minutes each time.

The weekly report also shows administrators how many total hall passes were generated, a list of student names and profiles ranked by the number of passes used, how many teachers approved passes, and how many encounters were prevented; that is, how many times students were prevented from holding a pass at the same time.
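A plausible way to implement the kind of overlap check described above is to treat each pass as a time interval tagged with a destination and flag pairs of students whose intervals for the same destination intersect. The sketch below is a guess at the general approach, not SmartPass's actual code or data model; the Pass fields, the pairing rule, and the example times are assumptions.

```python
# Hypothetical sketch of an "encounter detection" style overlap check.
# Field names and the pairing rule are assumptions, not SmartPass's API.
from dataclasses import dataclass
from datetime import datetime
from itertools import combinations

@dataclass
class Pass:
    student: str
    destination: str
    start: datetime
    end: datetime

def detect_encounters(passes):
    """Return (student_a, student_b, destination) for overlapping passes
    issued to different students for the same destination."""
    encounters = []
    for a, b in combinations(passes, 2):
        same_place = a.destination == b.destination
        different_students = a.student != b.student
        overlap = a.start < b.end and b.start < a.end
        if same_place and different_students and overlap:
            encounters.append((a.student, b.student, a.destination))
    return encounters

if __name__ == "__main__":
    day = "2022-08-25"
    passes = [
        Pass("Betty", "water fountain", datetime.fromisoformat(f"{day} 13:00"),
             datetime.fromisoformat(f"{day} 13:07")),
        Pass("Bob", "water fountain", datetime.fromisoformat(f"{day} 13:03"),
             datetime.fromisoformat(f"{day} 13:10")),
    ]
    print(detect_encounters(passes))  # [('Betty', 'Bob', 'water fountain')]
```

Counting the flagged pairs over a week would also yield the "encounters prevented" style tallies the report describes, though how the vendor actually computes them is not public in this article.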

Fewer students in hallways and bathrooms at one time means fewer chances for fights or vandalism.

While SmartPass doesn't track every step a student takes, it does capture every time a student signs in and out of a classroom and generates reports from the data it captures.

Not all parents are in favor of the app. The board of the Stroudsburg Area School District in Monroe County, Pennsylvania, voted last week to start using SmartPass in its schools.

Parent Michelle Grana spoke to the board before the decision.

"I said I'm upset about the lack of privacy involved, and with the data going into private corporate hands, and how their education is being outsourced to tech companies," Grana told The Epoch Times about her comments to the school board. "You're literally checking on underage students using a natural, private environment. That's gross."

Grana said she believes teachers should be better trained in managing classrooms and children with behavioral needs, instead of looking to technology devices to aid in disciplinary actions.

In response to the decision to use SmartPass, which she called "the straw that broke the camel's back," Grana has removed her children, including a high school senior, from the school.

"Children should not be tracked to go to the bathroom," Grana said.

Shannon Grady, chair of the local Moms for Liberty chapter in Chester County, Pennsylvania, calls the app an invasion of privacy and says her group is preparing non-consent forms for parents to tell schools they will opt out of using such technology.

"It's just another example where a third party has our children's data. Schools are basically letting a third party track your child," Grady told The Epoch Times. "They're gathering all this data on our kids, and then those companies are using it for profit. They're data mining on our children, whether it's in the form of surveys, whether it's in the form of this."

Cosmas Curry, superintendent of Stroudsburg Area School District, says the app is more efficient than using paper, cuts down on classroom interruptions, and has safety benefits.

"There's no big deal here," Curry told The Epoch Times. He explained that students can set a meeting with a teacher in another classroom later in the week without leaving their current room to arrange a hall pass. It reduces traffic in the halls. And in case of emergency, students with the app on their phones could check in.

For example, in a fire drill, when administrators are counting students to make sure everyone is out of the school, with the paper system there could be many hall passes issued by many teachers. "With 90 teachers in Stroudsburg High School, imagine if each one had written a hall pass before the fire drill," he said. They would have to compare 90 papers and find out where the students were last seen.

But with SmartPass, Curry said, all the information is in one place and the school can more quickly account for everyone. Plus, kids who ended up walking out with another class can check in and provide their location.

SmartPass limits the number of students allowed in the hallway at once by creating a digital waiting list. If the maximum number of students are already in the hall, the app tells students who requested a pass when it is their turn.
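The waiting-list behaviour could be modelled as a simple capacity-limited queue, as in the sketch below. This is purely illustrative; SmartPass's real logic, capacity rules, and notifications are not described in the article, so the class name and methods here are hypothetical.

```python
# Illustrative hallway-capacity queue; not SmartPass's actual implementation.
from collections import deque

class HallwayPassQueue:
    """Approve passes up to a hallway capacity; queue everyone else."""

    def __init__(self, max_in_hallway):
        self.max_in_hallway = max_in_hallway
        self.in_hallway = set()   # students currently holding a pass
        self.waiting = deque()    # digital waiting list, first come first served

    def request_pass(self, student):
        if len(self.in_hallway) < self.max_in_hallway:
            self.in_hallway.add(student)
            return f"{student}: pass approved"
        self.waiting.append(student)
        return f"{student}: added to waiting list (position {len(self.waiting)})"

    def return_to_class(self, student):
        """Free a hallway slot and, if anyone is waiting, admit the next student."""
        self.in_hallway.discard(student)
        if self.waiting:
            nxt = self.waiting.popleft()
            self.in_hallway.add(nxt)
            return f"{nxt}: it is now your turn"
        return None

if __name__ == "__main__":
    q = HallwayPassQueue(max_in_hallway=2)
    print(q.request_pass("Ann"))     # pass approved
    print(q.request_pass("Ben"))     # pass approved
    print(q.request_pass("Cara"))    # added to waiting list (position 1)
    print(q.return_to_class("Ann"))  # Cara: it is now your turn
```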

Curry emphasized that no child has been, nor would be, denied use of the bathroom.

"I just think that people have got to give it a chance and understand what it means," Curry said.

The Epoch Times requested comment from SmartPass. The company did not respond by press time.

More than 200,000 students and school faculty throughout the United States use the app, which costs $2 per student, according to a 2021 report from 6abc Action News in Philadelphia.

SmartPass can be installed on students phones or school tablets or laptop computers. During COVID-19 mitigation, some schools used SmartPass for contact tracing.

Grady worries that if students become accustomed to scanning their phones or logging into the app as children, they won't object to being tracked as adults.

"You're training my child to just scan it, scan it, scan it, so then they get comfortable being tracked and traced every move," Grady said. "It's conditioning."

Curry says students and adults are already being tracked, but not by schools.

The superintendent said it's somewhat hypocritical for parents to object to SmartPass when they allow their children to have smartphones.

"If a child has a cell phone, they're already being tracked," Curry said. "Whether it's Google, TikTok, Instagram, Twitter, like, who are they kidding? We're not doing that."


Beth Brelje is an investigative journalist covering Pennsylvania politics, courts, and the commonwealth's most interesting and sometimes hidden news. Send her your story ideas: Beth.brelje@epochtimes.us

Follow this link:

Parents Wary of Digital Hall Pass That Records Students' Movements - The Epoch Times
