
Analysis of research articles

This is a discussion of the challenges data scientists face as a result of the exponential amounts of data they work with in their fields.

Specifically, here are some of the concerns:

Experiments and computer simulations generate so much data per year that the data becomes too large and varied for data scientists to do their jobs effectively. A quote from the article reads: “In astronomy and particle physics, these new experiments generate petabytes (1 petabyte = 10^15 bytes) of data per year. In bioinformatics, the increasing volume (3) and the extreme heterogeneity of the data are challenging scientists” (Bell, Hey, Szalay, 2009).

This is a huge amount of data per year for scientists to analyse and draw their findings from, not to mention store for reproducibility's sake.

The article discusses solutions to this issue; one example was the GrayWulf database design, which emphasises high-speed access to the data. This system cut the time it took to execute a query from days on a traditional system to minutes on the new one.

“GrayWulf won the Storage Challenge at the SC08 conference (9) by executing a query on the Sloan Digital Sky Survey (SDSS) database in 12 minutes; the same task took 13 days on a traditional (nonparallel) database system.” (Bell, Hey, Szalay, 2009)

Such improvements changed the way researchers approach data-intensive science, because analyses could be done so much faster.
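To make the speedup concrete, here is a minimal sketch of the core idea behind parallel database designs like GrayWulf (this is an illustration of the general technique, not GrayWulf's actual implementation): a large table is split into partitions, each partition is scanned concurrently, and the partial results are merged. The "sky survey" rows and the `parallel_query` helper below are hypothetical.

```python
# Sketch of a partitioned parallel scan, the general idea behind systems
# like GrayWulf (not its actual implementation).
from concurrent.futures import ThreadPoolExecutor

# Hypothetical mini "sky survey" table of (object_id, magnitude) rows,
# split into partitions as a parallel database would distribute them.
partitions = [
    [(1, 17.2), (2, 19.8), (3, 21.5)],
    [(4, 16.9), (5, 22.1), (6, 18.3)],
    [(7, 20.4), (8, 15.7), (9, 23.0)],
]

def scan_partition(rows, max_magnitude):
    """Scan one partition: SELECT object_id WHERE magnitude < max_magnitude."""
    return [obj_id for obj_id, mag in rows if mag < max_magnitude]

def parallel_query(parts, max_magnitude):
    """Run the same scan over every partition concurrently, then merge."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(scan_partition, parts,
                           [max_magnitude] * len(parts))
    return sorted(obj_id for partial in results for obj_id in partial)

print(parallel_query(partitions, 20.0))  # → [1, 2, 4, 6, 8]
```

Because every partition is scanned independently, the wall-clock time of a full-table scan shrinks roughly with the number of nodes, which is where day-to-minute improvements come from.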

The article continues by discussing future solutions, or ways we might do data-intensive science in the future. An interesting quote about a “cloud service” type of approach reads:

“These ‘cloud services’ provide high bandwidth access to cost-effective storage and computing services. However, there are no clear examples of successful scientific applications of clouds yet; making optimum use of such services will require some radical rethinking in the research community” (Bell, Hey, Szalay, 2009)

This suggests that, because of the efficiency and accessibility of cloud services, data-intensive science might see a shift to this platform. However, as of the writing of the article (2009), such application in practice would require the research community to fundamentally rethink the way it approaches data-intensive science.

I feel as though this source, the AAAS, is credible: it is peer reviewed, concerned with proper methodology, and aims to be as unbiased as possible.

“Science, also widely referred to as Science Magazine, is the peer-reviewed academic journal of the American Association for the Advancement of Science (AAAS) and one of the world’s top academic journals.”

I feel this particular article is trustworthy as it's published in Science and includes multiple sources for where its information was sourced. However, the article came out in 2009, so it is dated; the part which discusses rethinking how we do science in the near future might not be relevant now.

Other Tony Hey articles:

The Fourth Paradigm: Data-Intensive Scientific Discovery

This article discusses the different “shifts” in data analytics throughout the years, and specifically the shift towards 3.0 that we are currently going through.

1.0 “Business intelligence”

The first shift in data analytics concerned the adoption of technology into business. This was a time when data analysis was difficult because it was painstakingly slow to undertake. The kind of analytics done was more concerned with things which happened in the past, as opposed to creating interpreted information to be used in the future.

“This was the era of the enterprise data warehouse, used to capture information, and of business intelligence software, used to query and report it.”

2.0 “Big data”

Around the mid-2000s, internet firms such as Google had gathered masses of data which could be used in new and exciting ways. The newly coined “big data” movement in analytics led to data-as-a-service type features, e.g. “Jobs you may be interested in” and “People you may know”.

Technologies such as non-relational (NoSQL) database systems and cloud services have been employed to store and analyze big data.
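The defining difference from the relational systems of analytics 1.0 is that NoSQL stores are typically schema-free. Here is a minimal sketch of that idea (a generic key-value/document store, not any specific product; the `put`/`get` helpers and user records are hypothetical):

```python
# Minimal sketch of the NoSQL document-store idea: no fixed schema,
# so records with different fields can live side by side.
store = {}  # key -> document (a plain dict)

def put(key, document):
    store[key] = document

def get(key):
    return store.get(key)

# Heterogeneous records: no ALTER TABLE needed to add a new field.
put("user:1", {"name": "Ada", "interests": ["astronomy"]})
put("user:2", {"name": "Grace", "employer": "Navy", "connections": 42})

print(get("user:2")["employer"])  # direct key lookup, no JOINs → Navy
```

Trading away rigid schemas and JOINs for direct key lookups is what lets such systems spread data across many cheap machines, which is why they paired naturally with the big-data workloads described above.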

3.0 “The future”

Not only should data firms be able to build services/products from big-data analyses; everyone should! 3.0 is concerned with how we can use the digital footprint of every modern device/service to benefit consumers and products.

Data analytics 3.0 uses traditional analysis techniques from data analytics 1.0 on the data sets introduced in 2.0 to generate meaningful insight on subjects.
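That combination can be sketched in a few lines: a classic descriptive statistic straight out of analytics 1.0 (the mean), computed incrementally so it scales to 2.0-sized data streams that cannot fit in memory at once. The clickstream values below are hypothetical.

```python
# A 1.0 statistic applied 2.0-style: one pass, O(1) memory, so the input
# can be an arbitrarily long stream rather than an in-memory table.
def running_mean(stream):
    """Incrementally fold each value into the mean."""
    mean, n = 0.0, 0
    for x in stream:
        n += 1
        mean += (x - mean) / n  # update the mean without storing the data
    return mean

# Hypothetical clickstream of page-load times (milliseconds).
print(running_mean([100, 200, 300, 400]))  # → 250.0
```

The same one-pass pattern extends to variance, counts, and histograms, which is why streaming summaries are a workhorse of analytics at this scale.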

Data analytics 3.0 is quite a broad subject; the article this information is sourced from offers more robust insight, including a section, “Ten Requirements for Capitalizing on Analytics 3.0”, where the author discusses strategy for using analytics 3.0.

I find this source trustworthy because it's published in the Harvard Business Review, which is owned by Harvard, one of the most widely respected universities in the world.
