Category Archives: Data Mining
Postdoctoral Research Fellow in Graph Neural Networks Theories … – Times Higher Education
The Role
An exciting opportunity for a researcher exists in the School of Computing at Macquarie University. We are seeking a qualified Postdoctoral Research Fellow to join a team with an international reputation for producing high quality research outcomes in the area of Graph Neural Networks Theories and Applications.
You will undertake research related to the projects GNNs in Social Event Detection and Online Social Network Analysis. The successful applicant will be expected to supervise Higher Degree Research students as well as lead and collaborate on research projects in Computer Science. Candidates with relevant research experience in Graph Neural Networks, Graph Data Mining, Social Events and Missing Information are strongly encouraged to apply, as are candidates with relevant research experience in AI/ML and cybersecurity.
This position will be jointly supported by Macquarie University and Data61. The successful candidate will demonstrate an excellent track record in research, as well as personal and professional skills that can enable service and leadership contributions that will help to strengthen the academic impact and reputation. The successful applicant will work closely with Prof. Jian Yang, Prof. Michael Sheng and Dr Jia Wu in the School of Computing, Macquarie University, Sydney, Australia; also with Dr Surya Nepal, Dr Cecile Paris and Dr Sharif Abuadbba from Data61.
About Us
The Faculty of Science and Engineering is a young and dynamic faculty driven by the spirit of discovery. Here we nurture the next generation of science and technology innovators. From a handful of students on the University's first day, the Faculty now serves nearly 6000 students across all our programs.
The School of Computing is the home of 43 academic staff and more than 100 research students, and an ever-growing cohort of undergraduates and postgraduate coursework students. The School offers a broad range of cutting edge undergraduate and postgraduate courses and enjoys a strong research ethos.
To Apply
To be considered for this position, applicants must address the selection criteria below and attach a CV (including a list of two referees) and a statement of research interests and achievements.
Selection Criteria
Enquiries: Professor Jian Yang at jian.yang@mq.edu.au or Dr Sharif Abuadbba at sharif.abuadbba@data61.csiro.au
Applications close: Tuesday, 31 October 2023 at 11:59PM (AEST)
Diversity and Inclusion
Macquarie University's goal is to increase the proportion of women, Aboriginal and Torres Strait Islander peoples, and diverse peoples of all backgrounds in Science, Technology, Engineering, Mathematics and Medicine (STEMM) roles. Applications are considered holistically, and candidates are encouraged to apply even if they may not feel strong on every individual selection criterion. For more information, please visit our Equity, Diversity and Inclusion page https://staff.mq.edu.au/work/diversity-inclusion
Flexible Work
At Macquarie, we believe by providing flexibility in when, where and how work is done, we can support our staff to manage their personal commitments, while optimising their work performance and contributions to the University. See how we lead in flexible work to enable an outcome focused and inclusive workplace. To learn more about our culture and hiring process, explore our Candidate Resource Hub.
See the original post here:
Postdoctoral Research Fellow in Graph Neural Networks Theories ... - Times Higher Education
D.C. Tax Revision Commission considers digital advertising tax and … – Lexology
On September 13, 2023, the D.C. Tax Revision Commission met and evaluated over a dozen tax proposals. Most concerning, the Commission discussed the possibility of implementing a digital advertising tax or a data mining tax.
D.C. Tax Revision Commission
The Council of the District of Columbia established the Commission to comprehensively review the District's tax code. The Commission's mandate is to make tax policy recommendations on:
By the end of 2023, the Commission is set to submit its slate of recommendations to the Council, along with specific steps for implementing the recommendations, such as draft legislation and regulations.
The Commission previously released reports in 1998[2] and 2014.[3] Each Commission's report made recommendations for changes to the District's tax system. For example, in 1998, the Commission recommended a 1.5% business activities tax and taxing sales of tangible products to District residents the same regardless of whether they are sold remotely or by District-based businesses. In 2014, the Commission recommended reducing the District's business franchise tax rate from 9.975% to 8.25%.
D.C. Tax Revision Commission proposals
Throughout 2023, the Commission has met with various tax and fiscal policy experts, as well as community and industry representatives. It has prepared a series of proposals to review and potentially suggest to the Council. On September 13, 2023, the Commission met for the first time to discuss these various proposals. In advance, the Commission released the list of proposals that they would review.
The first proposal on the list was "Strengthen and clarify taxation of digital ads and services." The Commission released a proposal paper elaborating on the topic. The proposal paper specifically listed as options: (1) a digital advertising tax act similar to Maryland's enactment; and (2) a tax on the extraction of consumer data by tech platforms in much the same way that states tax the extraction of valuable commodities like fossil fuels or precious metals. The second option would be a per-consumer excise tax that accounts for each user whose data is being mined.
The sordid history of state digital advertising taxes
Maryland became the first and only state in the United States to impose a tax on gross receipts from digital advertising services in 2022. The tax is imposed on gross revenues derived from digital advertising services in Maryland at graduated rates, from a minimum rate of 2.5% to a maximum rate of 10% of such revenues. The Maryland digital advertising tax generated immediate controversy, with taxpayers challenging the tax in state[4] and federal courts on federal statutory and constitutional grounds.[5] While those challenges have yet to result in a final nonappealable decision on the legality of the Maryland tax, we expect the Maryland Supreme Court to eventually reach the merits and, hopefully, find the tax to be unconstitutional. And while several other states have considered Maryland-style digital advertising tax legislation, those proposals ultimately have been rejected.
With the history of Maryland's digital advertising tax in mind, the Commission's proposal paper acknowledged that any potential D.C. tax similar to Maryland's would likely face similar legal challenges. But the paper noted that the District may be able to minimize challenges by, for example, imposing a tax on all advertising (as opposed to just digital advertising). There could then be less risk that the tax would violate the federal Internet Tax Freedom Act. But imposing a broad advertising tax could prove difficult for the Council based on recent experience. In 2020, the Council considered expanding its sales tax base to include sales of advertising services and personal information as part of its Fiscal Year 2021 Budget Support Act of 2020. Ultimately, the D.C. Council opted not to pursue expanding its sales tax base to these sales due to the adverse impact on local media and press, on top of the likely litigation that would follow if a targeted tax on digital advertising were adopted.[6]
The Commission meeting on September 13th
At the meeting, the Commission discussed the first batch of proposals without formally voting on them. The meeting was instead intended to advance discussion in preparation for formal votes at a later meeting.
The Commission's members were generally interested in pursuing a digital advertising tax or a data mining tax. The members saw these taxes as being worth consideration because of the potentially large amount of revenue they could generate. One member supported the taxes because he saw them as an expansion of the tax base to match changing technology, rather than a tax rate increase.
However, there was hesitancy among the Commission's members because of the ongoing Maryland litigation and potential tax implementation difficulties. The Commission's members noted that the process to pass and implement these taxes could be long, especially as they were interested in whether the Maryland tax would survive legal scrutiny.
The Commission also considered a variety of other proposals. For example, the Commission's members reacted negatively to increasing the general sales tax rate from 6 percent to 7 percent due to regressivity concerns. However, they were supportive of eliminating the motor vehicle excise tax exemption for electric vehicles.
Future meetings and next steps
The Commission currently has scheduled five additional proposal review sessions: September 19th, September 26th, October 10th, October 20th, and October 24th. These sessions will shape which recommendations the Commission will make to the D.C. Council. Of particular interest for the September 19th meeting, the Commission likely will discuss a business activity tax proposal, which may resemble the Texas franchise (margin) tax and the Oregon corporate activity tax. Eversheds Sutherland will continue to follow these review sessions that will color the next decade of District taxation.
View post:
D.C. Tax Revision Commission considers digital advertising tax and ... - Lexology
Pursue A Masters In Data Science With The 3rd Best Online Program – KDnuggets
Sponsored Content
Data science teams need general industry experts who understand data science and technical specialists who can make it happen. Bay Path University will provide you with a career path in data science, regardless of your background and experience. We were one of the first institutions to develop two tracks for completing the Master of Science (MS) in Applied Data Science degree. Which one is right for you?
Generalist Track - This track prepares students to be well-rounded, collaborative, and skilled data scientists and analysts regardless of their background or area of expertise. Coursework in this track provides the foundation needed for breaking into the fast-growing field of data science.
Specialist Track - This track prepares students to take on more technical roles on data science teams, such as data modeler, data mining engineer, or data warehouse architect.
"I gained practical experience applying a diverse set of data science skills and was able to build a diverse portfolio."
Aspen Gulley G'23
Our MS in Applied Data Science Degree Program Provides:
See the original post here:
Pursue A Masters In Data Science With The 3rd Best Online Program - KDnuggets
Working with Big Data: Tools and Techniques – KDnuggets
Long gone are times in business when all the data you needed was in your little black book. In this era of the digital revolution, not even the classical databases are enough.
Handling big data has become a critical skill for businesses and, with them, for data scientists. Big data is characterized by its volume, velocity, and variety, offering unprecedented insights into patterns and trends.
Handling such data effectively requires specialized tools and techniques.
No, it's not simply lots of data.
Big data is most commonly characterized by the three Vs: volume, velocity, and variety.
All the big data characteristics mentioned impact the tools and techniques we use to handle big data.
When we talk about big data techniques, they are simply methods, algorithms, and approaches we use to process, analyze, and manage big data. On the surface, they are the same as in regular data. However, the big data characteristics we discussed call for different approaches and tools.
Here are some prominent tools and techniques used in the big data domain.
What is it?: Data processing refers to operations and activities that transform raw data into meaningful information. It spans tasks from cleaning and structuring data to running complex algorithms and analytics.
Big data is sometimes processed in batches, but streaming is more prevalent.
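As a rough illustration of the batch side, here is a minimal PySpark sketch (Apache Spark is one of the tools listed below). It assumes a local Spark installation, and the file name logs.txt is made up for the example.

```python
# Minimal PySpark batch job: word counts over a hypothetical logs.txt.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-word-count").getOrCreate()

# Each line of the file becomes a row in the single 'value' column.
lines = spark.read.text("logs.txt")

# Split lines into words and count occurrences; Spark parallelizes the work.
words = lines.select(F.explode(F.split(F.col("value"), r"\s+")).alias("word"))
counts = words.groupBy("word").count().orderBy(F.desc("count"))

counts.show(10)
spark.stop()
```

The same DataFrame API carries over to Spark Structured Streaming when the source is a stream (for example, a Kafka topic) rather than a static file.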
Key Characteristics:
Big Data Tools Used: Apache Hadoop MapReduce, Apache Spark, Apache Tez, Apache Kafka, Apache Storm, Apache Flink, Amazon Kinesis, IBM Streams, Google Cloud Dataflow
Tools Overview:
What is it?: ETL is Extracting data from various sources, Transforming it into a structured and usable format, and Loading it into a data storage system for analysis or other purposes.
Big data characteristics mean that the ETL process needs to handle more data from more sources. That data is usually semi-structured or unstructured, and it is transformed and stored differently than structured data.
ETL in big data also usually needs to process data in real time.
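For orientation only, here is a minimal single-machine sketch of the extract-transform-load pattern using pandas and SQLite; the file orders.csv and its columns are hypothetical, and real big data ETL would run on the distributed tools listed below.

```python
# Toy extract-transform-load run; orders.csv and its columns are hypothetical.
import sqlite3
import pandas as pd

# Extract: pull raw, possibly messy data from a source file.
raw = pd.read_csv("orders.csv")

# Transform: coerce types, drop unusable rows, derive a cleaned column.
raw["amount"] = pd.to_numeric(raw["amount"], errors="coerce")
clean = raw.dropna(subset=["amount"]).assign(amount_usd=lambda d: d["amount"].round(2))

# Load: write the structured result into a queryable store.
with sqlite3.connect("warehouse.db") as conn:
    clean.to_sql("orders", conn, if_exists="replace", index=False)
```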
Key Characteristics:
Big Data Tools Used: Apache NiFi, Apache Sqoop, Apache Flume, Talend
Tools Overview:
Apache NiFi: data provenance tracking; extensible architecture with a wide range of processors.
Apache Sqoop: parallel import/export; compression and direct import features; incremental data transfer capabilities.
Apache Flume: reliable and durable data delivery; native integration with the Hadoop ecosystem; fault-tolerant architecture; extensible with custom sources, channels, and sinks.
Talend: broad connectivity to databases, apps, and more; data quality and profiling tools; graphical interface for designing data integration processes; support for data quality and master data management.
What is it?: Big data storage must store vast amounts of data generated at high velocities and in various formats.
The three most distinct ways to store big data are NoSQL databases, data lakes, and data warehouses.
NoSQL databases are designed for handling large volumes of structured and unstructured data without a fixed schema (NoSQL - Not Only SQL). This makes them adaptable to the evolving data structure.
Unlike traditional, vertically scalable databases, NoSQL databases are horizontally scalable, meaning they can distribute data across multiple servers. Scaling becomes easier by adding more machines to the system. They are fault-tolerant, have low latency (appreciated in applications requiring real-time data access), and are cost-efficient at scale.
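As a small illustration of the schema-less model, here is a sketch using pymongo with MongoDB (one of the stores listed further below). It assumes a MongoDB server on localhost; the database, collection, and fields are made up.

```python
# Schema-less storage with MongoDB; server, database, and fields are assumed.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["analytics"]["events"]

# Documents in one collection need not share the same fields.
events.insert_many([
    {"user": "a1", "action": "click", "page": "/home"},
    {"user": "b2", "action": "purchase", "amount": 19.99, "currency": "USD"},
])

# Query by any field; the structure can evolve without migrations.
for doc in events.find({"action": "purchase"}):
    print(doc["user"], doc.get("amount"))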
Data lakes are storage repositories that store vast amounts of raw data in their native format. This simplifies data access and analytics, as all data is located in one place.
Data lakes are scalable and cost-efficient. They provide flexibility (data is ingested in its raw form, and the structure is defined when reading the data for analysis), support batch and real-time data processing, and can be integrated with data quality tools, leading to more advanced analytics and richer insights.
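Here is a minimal sketch of the "store raw now, structure at read time" idea, using pandas with partitioned Parquet files. A local ./lake directory stands in for S3, Azure Data Lake, or HDFS, the sensor fields are invented, and the pyarrow engine is assumed to be installed.

```python
# "Store raw now, structure later": land records as partitioned Parquet files.
# ./lake stands in for S3, Azure Data Lake, or HDFS; requires the pyarrow engine.
import pandas as pd

readings = pd.DataFrame([
    {"date": "2023-09-01", "sensor": "s1", "reading": 21.4},
    {"date": "2023-09-01", "sensor": "s2", "reading": 19.8},
    {"date": "2023-09-02", "sensor": "s1", "reading": 22.1},
])

# Write data in its raw form, partitioned by date for cheap selective reads.
readings.to_parquet("lake/sensor_readings", partition_cols=["date"])

# Schema-on-read: load only the partition needed for a given analysis.
day_one = pd.read_parquet("lake/sensor_readings/date=2023-09-01")
print(day_one)
```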
A data warehouse is a centralized repository optimized for analytical processing that stores data from multiple sources, transforming it into a format suitable for analysis and reporting.
It is designed to store vast amounts of data, integrate it from various sources, and allow for historical analysis since data is stored with a time dimension.
Key Characteristics:
Big Data Tools Used: MongoDB (document-based), Cassandra (column-based), Apache HBase (column-based), Neo4j (graph-based), Redis (key-value store), Amazon S3, Azure Data Lake, Hadoop Distributed File System (HDFS), Google Big Lake, Amazon Redshift, BigQuery
Tools Overview:
What is it?: It's the discovery of patterns, correlations, anomalies, and statistical relationships in large datasets. It involves disciplines like machine learning and statistics, and uses database systems to extract insights from data.
The amount of data mined is vast, and the sheer volume can reveal patterns that might not be apparent in smaller datasets. Big data usually comes from various sources and is often semi-structured or unstructured. This requires more sophisticated preprocessing and integration techniques. Unlike regular data, big data is usually processed in real time.
Tools used for big data mining have to handle all this. To do that, they apply distributed computing, i.e., data processing is spread across multiple computers.
Some algorithms might not be suitable for big data mining, as it requires scalable parallel processing algorithms, e.g., SVM, SGD, or Gradient Boosting.
Big data mining has also adopted Exploratory Data Analysis (EDA) techniques. EDA analyzes datasets to summarize their main characteristics, often using statistical graphics, plots, and information tables. Because of that, we'll talk about big data mining and EDA tools together.
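To make the scalability point concrete, here is a small sketch of out-of-core learning with scikit-learn's SGDClassifier (SGD is one of the scalable algorithms mentioned above), trained chunk by chunk with partial_fit on synthetic data; real big data mining would distribute this across the tools listed below.

```python
# Out-of-core learning: SGDClassifier trained chunk by chunk with partial_fit,
# so the full dataset never has to fit in memory. The data here is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()
classes = np.array([0, 1])

# Simulate streaming chunks of a much larger dataset.
for _ in range(10):
    X = rng.normal(size=(1_000, 20))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    model.partial_fit(X, y, classes=classes)

# Check the model on one held-out chunk.
X_test = rng.normal(size=(1_000, 20))
y_test = (X_test[:, 0] + 0.5 * X_test[:, 1] > 0).astype(int)
print("holdout accuracy:", model.score(X_test, y_test))
```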
Key Characteristics:
Big Data Tools Used: Weka, KNIME, RapidMiner, Apache Hive, Apache Pig, Apache Drill, Presto
Tools Overview:
What is it?: It's the graphical representation of information and data extracted from vast datasets. Using visual elements like charts, graphs, and maps, data visualization tools provide an accessible way to understand patterns, outliers, and trends in the data.
Again, the characteristics of big data, such as size and complexity, make its visualization different from regular data visualization.
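The tools listed below are mostly BI platforms, but the underlying pattern, aggregating or downsampling first and plotting only the summary, can be sketched in a few lines of Python with pandas and matplotlib; the data here is synthetic.

```python
# Aggregate-then-plot: summarize event-level data first, draw only the summary.
# The half-million synthetic rows stand in for a much larger dataset.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 500_000
events = pd.DataFrame({
    "timestamp": pd.date_range("2023-01-01", periods=n, freq="min"),
    "latency_ms": rng.gamma(shape=2.0, scale=50.0, size=n),
})

# Collapse raw events into daily means before plotting.
daily = events.groupby(events["timestamp"].dt.date)["latency_ms"].mean()

daily.plot(title="Mean latency per day")
plt.ylabel("latency (ms)")
plt.tight_layout()
plt.show()
```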
Key Characteristics:
Big Data Tools Used: Tableau, PowerBI, D3.js, Kibana
Tools Overview:
Big data is at once similar to regular data and completely different. The two share techniques for handling data, but because of big data's characteristics, those techniques are the same in name only; in practice they require different approaches and tools.
If you want to get into big data, you'll have to use various big data tools. Our overview of these tools should be a good starting point for you.
Nate Rosidi is a data scientist in product strategy. He's also an adjunct professor teaching analytics, and is the founder of StrataScratch, a platform helping data scientists prepare for their interviews with real interview questions from top companies. Connect with him on Twitter: StrataScratch or LinkedIn.
Read this article:
Scientists suggest use of data-driven approach to look for life on other planets – Phys.org
A large team of scientists with a wide variety of backgrounds has joined together to suggest that a data-driven approach to search for life elsewhere in the universe should replace methods now in use. In their paper posted on the arXiv preprint server, the group explains how a data-driven approach could help prevent human-centered biases from overlooking potential signs of life.
Over the past few decades, scientists have become much more open to the possibility of discovering life in places other than on Earth. And because of that, more work has been done to find life, or at least signs of it. But, as the group on this new effort points out, most such approaches tend to expect that other forms of life will resemble those found on Earth. And that could be blinding scientists to signs of life that might be there but are being missed.
To overcome such a problem, the researchers suggest a more data-driven approach be used. They note that a lot of data have been obtained regarding various parts of the night sky. They also note that the data are in different formats. Some are radio wave graphs, while others describe the attributes of light emitted by a section of the sky, or even a given planet.
Such data, they suggest, are reminiscent of the mass of data that can be found on the internet that relates to various activities, mostly by humans. They further note that such data have been used to find out new things, to spot trends, or even to make up new things that might be related to other things, through the use of AI applications.
This, they argue, suggests that a similar approach could be used to study the huge amounts of data that have been accumulated by decades of space research. Doing so, they further argue, could help find things that stand out in ways that have not been noticed before. A given star system, for example, may have unique attributes that scientists have not yet noticed because they have been looking for signs that relate to the types of life we expect to see.
They conclude by suggesting specific types of data-driven research be conducted on data that currently exist, such as that collected by all-sky surveys done using far-infrared wavelength devices. Another approach could involve data mining on the results of large sky surveys, or of surveys that were done with radio astronomical interferometers, or even research that involved known artifacts in the solar system.
More information: T. Joseph W. Lazio et al, Data-Driven Approaches to Searches for the Technosignatures of Advanced Civilizations, arXiv (2023). DOI: 10.48550/arxiv.2308.15518
Journal information: arXiv
The rest is here:
Scientists suggest use of data-driven approach to look for life on other planets - Phys.org
TSX snaps losing streak, led by tech and mining shares – Reuters
The Art Deco facade of the original Toronto Stock Exchange building is seen on Bay Street in Toronto, Ontario, Canada January 23, 2019. REUTERS/Chris Helgren/File Photo
Sept 11 (Reuters) - Canada's main stock index rose on Monday, snapping a four-day losing streak, as technology and mining shares benefited from a pickup in investor sentiment ahead of U.S. inflation data this week.
The Toronto Stock Exchange's S&P/TSX composite index (.GSPTSE) ended up 108.11 points, or 0.5%, at 20,182.76.
"It looks like a risk-on day when you see tech rallying, you see some of the base metals moving and the Canadian dollar is rallying as well," said Philip Petursson, chief investment strategist at IG Wealth Management.
The Canadian dollar extended its recovery from a five-month low as an increase in risk appetite weighed on the safe-haven U.S. dollar (.DXY).
Wall Street also notched gains as investors awaited U.S. inflation data on Wednesday.
"The inflation data later this week out of the United States will probably bring a bit of a dampener on the rally," Petursson said. "I expect inflation to move higher on a year-over-year basis and that might just reignite thoughts that the Fed isn't done (raising interest rates)."
Money markets expect the Federal Reserve to leave its policy rate on hold at a policy decision later this month but the following meeting in November is seen as a closer call.
The Toronto market's technology sector rallied 1.6%, helped by a gain of 14% for BlackBerry Ltd (BB.TO), recouping much of last week's sharp decline.
Heavily weighted financials also advanced, rising 1%, and the materials group, which includes precious and base metals miners and fertilizer companies, added 1.2% as copper prices jumped.
Energy was a laggard, falling 1.9%, as the recent rally in oil lost some momentum. Oil settled down 0.3% at $87.29 a barrel.
Reporting by Fergal Smith in Toronto and Khushi Singh; Editing by Tasim Zahid and Sandra Maler
The rest is here:
TSX snaps losing streak, led by tech and mining shares - Reuters
Australia, Canada, and US merge geo data – North of 60 Mining News
Part of a larger collaboration to better understand the critical minerals potential across Australia, Canada, and the United States, the federal geological surveys from the three countries have merged national-scale geological, geophysical, and mineral resource information into a single dataset that is expected to enhance critical mineral discovery.
"Geology doesn't stop at the border and neither does our data," said U.S Geological Survey scientist Anne McCafferty, who led the data compilation effort. "Scientists will now be able to look at geological and geophysical data seamlessly across both Canada and the United States, as well as make direct comparisons to Australia."
The tri-national geologic, geophysics, and mineral resource data release includes more than 40 earth science data layers, including a new map of variations in the Earth's natural magnetic field for the entirety of Canada and the U.S that geoscientists can use to develop three-dimensional representations of geology underground.
"These standardised datasets, spanning the continents of Australia and North America, enable unique investigations into the controls on critical mineral distribution which were simply not possible beforehand," said Geoscience Australia scientist Karol Czarnota. "I'm sure they will be a foundation for ongoing research and innovation for many years to come."
The tri-national geo-data release is part of the Critical Minerals Mapping Initiative, a partnership forged in 2019 to support the establishment of a diversified supply of critical minerals in Australia, Canada, and the U.S.
"This joint data release speaks to our continued commitment to improve scientific cooperation and data sharing between the U.S. Geological Survey, Geoscience Australia, and the Geological Survey of Canada," said Geological Survey of Canada scientist Christopher Lawley.
America's heavy dependence on countries like China for critical minerals and an overall lack of knowledge about many of these elements needed for clean energy, high-tech devices, and military hardware was a primary driver behind the formation of CMMI.
In addition to data-sharing, CMMI combines the expertise brought to the table by each of the nation's geological teams.
"Because each country has expertise in different fields, bringing all of these experts together can create a strong foundation of mineral information that can be used by policy makers, resource-managers, industry and others to help meet the needs of all three countries' economies and security," USGS penned in a 2020 announcement of the critical minerals collaboration.
This geological partnership will also help Australia, Canada, and the U.S. coordinate supply chains for the minerals critical to the clean energy goals and manufacturing sectors in all three countries.
(Image: U.S. Geological Survey) A team led by the Geological Survey of Canada used the tri-national geophysics datasets created under CMMI and machine-learning techniques to map Mississippi Valley-style zinc deposits in Australia and North America.
One way the combined geological information has already been leveraged is as a much larger and more diverse dataset for artificial intelligence and machine learning mineral exploration techniques.
A CMMI team led by the Geological Survey of Canada has already used the data layers to map the potential for certain types of zinc deposits in all three nations.
The sediment-hosted zinc deposits this investigation focused on often host gallium and germanium, a pair of technology metals important to chipmaking that have gained attention due to China's dominance in their supply and restrictions on exports.
The international team of geoscientists is using similar techniques to model the potential for other critical mineral systems across Australia, Canada, and the U.S.
Over his more than 15 years of covering mining and mineral exploration, Shane has become renowned for his ability to report on the sector in a way that is technically sound enough to inform industry insiders while being easy to understand by a wider audience.
Email: [email protected] | Phone: (907) 726-1095 | https://www.linkedin.com/in/shane-lasley-ab073b12/
Read the rest here:
Australia, Canada, and US merge geo data - North of 60 Mining News
Lessons From the Tech World that Apply to Accounting Firms – CPAPracticeAdvisor.com
By Davis Bell.
I love talking to people who work in different industries than the one in which I work (tech), because they see the world differently, which enables me to learn new things from them that I can apply to my work.
One example of this from the accounting profession: Matthew May, one of the co-founders of Acuity, a national accounting firm, recently stated during a panel discussion at an accounting conference that his firm had learned a lot of its processes for lead generation and prospect data mining from a software company located near his firm. Matthew's firm borrowed the tactics of this software company to successfully grow its database of leads.
I thought it might be useful for folks in the accounting industry to hear a few lessons from the tech world that can be applied to accounting:
How to attract and retain top talent in a labor constrained market:
The accounting industry is facing a significant shortfall of qualified workers. This enables employees to be choosy about where they work, which means in turn they're after more than just competitive pay and benefits. For managers who began their careers a decade or more ago, this requires changing their paradigm of what employees want. This is a challenge we in the tech industry have faced for quite some time, especially for technical roles like engineering and product management. Some tech companies thought the answer was simply perks: Massages! Ping pong! Bring your dog to work! But in my experience, what workers actually want are career development, opportunities to learn and upskill, talented co-workers, hard problems to solve, and a culture of openness and collaboration. Time spent talking to employees and recruits to figure out what matters to them and how you can deliver those things is time well spent.
Quantitative data is key to success.
Clear and real-time analytics are essential to the health of a tech company, an accounting firm, and really any business. Without the right numbers, it's impossible to see exactly what is happening from an operations and workflow standpoint, and your firm is no exception, any more than a tech company is.
Two types of metrics to consider from the technology space are OKRs (objectives and key results) and key performance indicators (KPIs), both of which should be used to drive the strategic vision of the business or accounting firm.
One of the reasons that quantitative data is so vital is that it removes ambiguity about which actions are necessary. For example, you may think there's a problem based on qualitative data (client issues, staff complaints, and workflow misfires), but you also need to back up assumptions with objective metrics. In addition, having numerical objectives and key results will make you much more focused. The same goes for KPIs.
If you need some guidance on which KPIs to track, consider the following: client profitability, client retention, new client growth rate, job profitability, employee productivity, and engagement or service profitability.
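If it helps to see this concretely, here is a purely illustrative pandas sketch computing two of those KPIs, client profitability and client retention, from a hypothetical table of engagement records; the clients and numbers are invented.

```python
# Purely illustrative: two KPIs from a hypothetical table of engagement records.
import pandas as pd

engagements = pd.DataFrame([
    {"client": "Acme Co",  "year": 2022, "fees": 48_000, "cost": 31_000},
    {"client": "Acme Co",  "year": 2023, "fees": 52_000, "cost": 33_500},
    {"client": "Brightly", "year": 2022, "fees": 20_000, "cost": 16_000},
    {"client": "Corvid",   "year": 2023, "fees": 35_000, "cost": 21_000},
])

# Client profitability: margin per client per year.
engagements["margin"] = engagements["fees"] - engagements["cost"]
print(engagements.groupby(["client", "year"])["margin"].sum())

# Client retention: share of 2022 clients still billed in 2023.
kept = set(engagements.loc[engagements["year"] == 2022, "client"]) & \
       set(engagements.loc[engagements["year"] == 2023, "client"])
retention = len(kept) / engagements.loc[engagements["year"] == 2022, "client"].nunique()
print(f"Client retention 2022 -> 2023: {retention:.0%}")
```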
Focus on the customers you want.
When developing apps and technology solutions, the most successful companies focus on the ideal client profile (ICP). It's important to drill down on the very specific characteristics of your ICP and to cater to them, rather than trying to be all things to all people and getting distracted by people who aren't your ICP.
By understanding and aligning your products and services with your ICP, you allow your team to truly create exemplary client experiences and uncover new ways to make your firm better. Regularly asking for feedback and being open to the good, the bad, and the ugly is another best practice from tech companies that you can use.
Simplify and streamline your tech stack.
From my perspective as the leader of a technology company, this may seem counterintuitive, but when it comes to tech, I do believe less is more. I liken appropriate technology usage in a business to a road with a cliff on both sides.
On one side of the road is a cliff formed in response to highly manual processes that do not leverage the powerful technology now available to accounting firms and other businesses. On the other side of the road the cliff is formed when a company has too many solutions. Firms on this side of the road are spending a ton of time and money on technology for the sake of it without getting the full return on investment out of it. If you go too far in one direction or the other it becomes a disaster.
The important takeaway here is to practice proactive technology hygiene where you can evaluate what is working and what is not, then take action to make changes as you see fit and match your tech stack to the objectives you are trying to meet.
-
Following these four lessons from tech companies when it comes to your firm is a positive step toward building an optimum practice playbook. Just like a tech company, accounting firms struggle with staffing, streamlining tech, establishing OKRs, and tracking formal KPIs, all while trying to deliver a seamless and secure client experience. It's a tall order, but the more we can learn from each other and leverage different business strategies, the stronger we'll all be in the future.
====
Davis Bell is the CEO of Canopy, a leading cloud-based practice management platform for tax professionals. He has held leadership roles in strategy, operations, and finance at a succession of SaaS companies. He's passionate about building innovative software that delights users.
Read more:
Lessons From the Tech World that Apply to Accounting Firms - CPAPracticeAdvisor.com
Review of visual analytics methods for food safety risks | npj Science … – Nature.com
Continue reading here:
Review of visual analytics methods for food safety risks | npj Science ... - Nature.com
Librem 11 tablet sets new standard for privacy and security with … – Help Net Security
Purism introduced the new Librem 11 tablet running secure PureBoot and Linux kernel-based PureOS. Librem 11 is made for individuals, organizations, government agencies, law enforcement agencies, and businesses that need security and privacy with powerful portability.
The Librem 11 tablet runs PureBoot and PureOS, offering security and privacy features that include:
Secure boot process. The Librem 11 powers on with PureBoot for the most secure boot process on the market.
Linux kernel-based OS. The Librem 11 runs PureOS, an open-source, peer-reviewable, and fully verifiable operating system.
No surveillance or data mining. Purism avoids surveillance and data mining business practices, ensuring all your personal identifying information is kept under your control.
True OS convergence. PureOS on the Librem 11 demonstrates real convergence, where the tablet becomes more than just a tablet: it becomes a full-featured computer that can act like a desktop when connected to a monitor, keyboard, and mouse, or like a laptop when connected to the included keyboard sleeve.
Secure supply chain. Purism offers Anti-Interdiction services for Librem 11, a service for those facing extreme security threats or who just want peace of mind that their device has not been tampered with before, during, or after the manufacturing and shipping process, ensuring the most secure tablet on the market today.
Malware protection. The Librem 11 tablet works with the Librem Key, which, when used in combination with PureBoot, cryptographically identifies tampering.
Librem 11 offers powerful features that include:
Originally posted here:
Librem 11 tablet sets new standard for privacy and security with ... - Help Net Security