NPR has a very long and detailed write-up about the YouTube shooter yesterday, which you should read.
The short version of yesterday's shooting at YouTube headquarters: a woman was angry that her workout videos had been age restricted and pretty much all of her content demonetized.
As a totally sensible response, she decided to get a gun and start shooting at random people at YouTube headquarters, leaving one man in critical condition and two women in serious and fair condition. She then turned the gun on herself and ended it.
All the videos with her in them seem to have been yanked. On her page she complained that 366,000 views in a month (5,127 hours of watch time) earned her $0.10. For contrast, with the low-viewership, non-targeted content I produce, 3,200 hours of viewing time earned me about $112 over the course of a year.
So from what I can guess, considering the watch time, her income from YouTube videos probably went from several hundred dollars a month to ten cents.
As of writing all of her YouTube channels display a message saying the account has been terminated due to multiple or severe violations of YouTube’s policy against spam, deceptive practices, and misleading content or other Terms of Service violations.
A few days back she had been reported missing by her family and was found sleeping in her car by police. After she answered some questions, they decided she wasn't in danger.
Her homepage included links to her Farsi, Turkish, hand art, and English channels, as well as Persian vegan music videos, videos on the dangers of anal sex, videos of animals being skinned alive or boiled as people laugh, veganism, and videos from several other YouTubers complaining about demonetization who probably want no association with her.
This should serve as a reminder, whenever your kids (or you) are watching YouTube, that sometimes these are the people speaking to them. She's dead and may have killed a man over a YouTube revenue stream.
I take a lot of photos and videos for Pocketables no matter which way you want to look at it. By frames, hours, or gigabytes, each review produces quite a few photos and, in some cases, video. To that end, the equipment I use has been constantly evolving to meet the challenge of bringing you, our readers, the best that we can. With that in mind, I recently took the opportunity to upgrade one core component of that kit: the humble phone tripod adapter I've been using since day 0. The Ulanzi ST-03 caught my eye as everything I could want, with bonuses like a cold shoe, so let's take a look at it.
The Ulanzi ST-03
Taking a look at the product, it's obvious we're not dealing with another race-to-the-bottom plastic spring adapter. The packaging shows off the ST-03 in addition to displaying some of its potential. Opening it up, the adapter is protected by a small amount of foam. Ulanzi claims the ST-03 is CNC machined, and it appears very well made overall. The folding design makes it very convenient to keep on hand, or in a smaller case than would otherwise fit my gear.
Fit, Finish, and Use
Although initially a bit stiff, the ST-03 was otherwise fine out of the package. The folding design allows it to easily fit in a pocket without any fuss, and the arms are secure when unfolded. The screw-tightening mechanism, although seemingly “less convenient” than auto-tension designs, holds the phone far more securely. Where before my phone would slide or shift occasionally inside the holder, I'm now confident that I could pick up and shake the tripod with the phone in it (I tried; it didn't go anywhere). The red coating appears quite resilient and hasn't faded in the first few weeks, unlike some I've seen. Size-wise, it stretches to dimensions that should accommodate even the largest phones, coming just shy of holding one of my older TW802 tablets (an 8″ Windows tablet with huge bezels).
Cold Shoe and Arca
Beyond holding a phone to a tripod or monopod, the ST-03 is shown with a variety of accessories attached, thanks to its integrated cold shoe. I don't have any cold shoe accessories at the moment (I had nothing to attach them to), but I did get some time to try a few. Visiting a local production company, ST-03 in hand, I went through a bevy of microphone and other accessory mounts, seeing which ones did and didn't fit the shoe. What I learned was that dimensionally everything fits as planned, although some of their custom mounts that reached forward had issues with the screw that tightens the adapter. As far as the Arca mount is concerned, it matched up with the Arca-style gear they had on hand, although most of theirs was packed so I didn't get to try any.
Ulanzi really hit the nail on the head with the design of the ST-03 tripod adapter, although further design revisions could possibly help with a few of the larger cold shoe accessories. The only real fault I could level against it is its price. At nearly 20 dollars, the ST-03 is on the high end of devices in its class. That money, however, buys excellent craftsmanship, secure holding strength, and flexibility beyond what many of the cheaper adapters offer. They also have a slightly less flexible model (it doesn't fold) for a few dollars less, but in my opinion the space savings are well worth the few extra dollars up front.
We live in the era of the connected experience, where our daily interactions with the world can be digitized, collected, processed, and analyzed to generate valuable insights.
Back in the days of Web 1.0, Google's founders figured out smart ways to rank websites by analyzing their connection patterns and using that information to improve the relevance of search results. Google was among the pioneers that created “web scale” architectures to analyze the massive data sets that resulted from “crawling” the web, work that gave birth to Apache Hadoop, MapReduce, and NoSQL databases. Those were the days when “connected” meant having some web presence, “interactions” were measured in numbers of clicks, and the analysis happened in overnight batch processes.
Fast forward to the present day and we find ourselves in a world where the number of connected devices is constantly increasing. These devices not only respond to our commands, but are also able to autonomously interact with each other. Each of these interactions generates data that collectively amount to high-volume data streams. Accumulating all this data to process overnight is not an option anymore. First, we want to generate actionable insights as fast as possible, and second, one night might not be long enough to process all the data collected the previous day. At the same time, our expectations as users have also evolved to the point where we demand that applications deliver personalized user experiences in near real time.
To remain competitive in a market that demands real-time responses to these digital pulses, organizations are adopting fast data applications as key assets in their technology portfolio. There are many challenges that need to be addressed to create the right architecture to support the range of fast data applications that your enterprise needs.
Here are five considerations every software architect and developer needs to take into account when setting the architectural foundations for a fast data platform.
1. Determine requirements first
Although this seems the obvious starting point of every software architecture, there are specific considerations to observe when we define the set of requirements for a software platform to support fast data applications.
Data in motion can be tricky to characterize, as there are usually probabilistic factors involved in the generation, transmission, collection, and processing of messages.
These are some of the questions we need answered in order to help us drive the architecture:
General data shape
- How large is each message?
- How many messages per time unit do we expect?
- Do we expect large changes in the frequency of message delivery? Are there peak hours? Are there “Black Friday” events in our business?
- How fast do we need a result?
- Do we need to process each record individually, or can we process them in small collections (micro-batches)?
- How “dirty” is the data? What do we do with “dirty” data? Drop it? Report it? Clean and reprocess it?
- Do we need to preserve ordering? Are there inherent time relationships in the messages that need to be preserved as they travel across the system?
- What message processing guarantee do we require? At least once? At most once? Exactly once?
The data shape will dictate capacity planning, tuning of the backbone, and scalability analysis for individual components. For example, 10,000 two-kilobyte messages per second is roughly 20 MB/s of sustained throughput, or about 1.7 TB per day.
The output expectations will assist in the choice of processing engine, while the processing guarantees will add restrictions in terms of processing semantics and error handling.
2. Leverage the convergence of fast data and microservices
Fast data applications are, by nature, focused on a single task. They have a clear input and output definition, and often a schema as well. Wait. Are we describing fast data applications or microservices? There is a blurred line dividing the two, and data processing libraries such as Akka Streams and Kafka Streams make that line blur even more, as we can use these libraries to embed data processing capabilities in our microservices.
We can think of combinations of data-processing applications with microservices to deliver specific features and insights from a data stream. For example, we can combine a machine learning job for anomaly detection with a dashboard that summarizes the findings to facilitate further investigation.
From a project perspective, creating small, self-contained, data-driven applications that meld streaming data and microservices together is a good practice to break down large problems and projects into approachable chunks, reduce risk, and deliver value faster.
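To make the convergence concrete, here is a minimal sketch of a stream-processing stage embedded in a small service, written with Akka Streams in Scala (assuming Akka 2.6+, where the ActorSystem supplies the stream materializer). The Reading type, the sample data, and the anomaly threshold are hypothetical placeholders for illustration, not anything from the article:

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}

object AnomalyService extends App {
  implicit val system: ActorSystem = ActorSystem("fast-data")
  import system.dispatcher

  // Hypothetical message type flowing through the service.
  final case class Reading(sensorId: String, value: Double)

  // Placeholder rule; a real service might score each reading with a trained model.
  def isAnomalous(r: Reading): Boolean = r.value > 100.0

  // One small, self-contained pipeline: ingest, filter, report.
  Source(List(Reading("s1", 42.0), Reading("s2", 128.5), Reading("s3", 7.3)))
    .filter(isAnomalous)
    .runWith(Sink.foreach(r => println(s"anomaly detected: $r")))
    .onComplete(_ => system.terminate())
}
```

In a production service, the in-memory source and console sink would be swapped for connectors to the messaging backbone, but the shape of the application stays the same: a clear input, a transformation, and a clear output.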
3. Get the message across
We discussed how fast data applications and microservices converge at the conceptual and execution levels. Another element they have in common is that they both consume and produce messages. A message-oriented implementation requires an efficient messaging backbone that facilitates the exchange of data in a reliable and secure way, with the lowest latency possible.
Apache Kafka is currently the leading project in this area. It delivers a publish/subscribe model backed by a distributed log implementation that provides durability, resilience, fault tolerance, and the ability to replay messages by different consumers. The multi-subscriber approach creates the opportunity to reuse a single data stream for multiple consuming applications.
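As a sketch of what talking to that backbone looks like, the snippet below publishes a single event to a Kafka topic from Scala using the standard Java client. The broker address, topic name, and payload are assumptions made for the example:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object EventPublisher extends App {
  val props = new Properties()
  props.put("bootstrap.servers", "localhost:9092") // assumed broker address
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

  val producer = new KafkaProducer[String, String](props)

  // Keying by device ID keeps each device's events ordered within a partition.
  producer.send(new ProducerRecord("device-events", "device-42", """{"temp":21.5}"""))
  producer.flush()
  producer.close()
}
```

Because the underlying log is durable, any number of consumer groups can read, or later replay, this same stream independently, which is exactly what makes the single-backbone, multi-subscriber pattern work.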
4. Leverage your SQL knowledge
We usually relate SQL to querying tables in relational databases. At first, it might seem odd to issue an SQL query on a stream of data. But what is a table? It's a collection of records that were added, updated, or deleted over time. We can see a table as a consolidated view of a stream of events over time. Likewise, we can create a stream from the observable changes applied to a table, reported as events. As Tyler Akidau, from Google, explained in his Strata NY 2017 presentation, “Foundations of streaming SQL”: “Streams are the in-motion form of data, both bounded and unbounded.” He goes further to explain how the relational algebra behind SQL can be applied to streams of data when we add time into the algebra, in what he calls “time-varying relations.”
In 2016, Apache Spark introduced Structured Streaming, a new streaming engine based on the SparkSQL abstractions and runtime optimizations. In the same year, Apache Flink announced streaming SQL support. More recently, Confluent introduced the KSQL query engine, adding streaming query capabilities to Apache Kafka, the popular event back end.
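As a brief sketch of streaming SQL in practice, the following Spark Structured Streaming program (Scala) registers a stream as a view and queries it with ordinary SQL. The socket source, the host/port, and the “deviceId,temp” input format are assumptions made for illustration:

```scala
import org.apache.spark.sql.SparkSession

object StreamingSqlExample extends App {
  val spark = SparkSession.builder
    .appName("streaming-sql")
    .master("local[*]")
    .getOrCreate()
  import spark.implicits._

  // Read "deviceId,temp" lines from a socket and treat them as an unbounded table.
  val readings = spark.readStream
    .format("socket")
    .option("host", "localhost") // assumed source for the example
    .option("port", "9999")
    .load()
    .as[String]
    .map { line =>
      val Array(id, temp) = line.split(",")
      (id, temp.toDouble)
    }
    .toDF("deviceId", "temp")

  readings.createOrReplaceTempView("readings")

  // Plain SQL over the stream; the engine maintains the aggregate incrementally.
  val avgTemps = spark.sql(
    "SELECT deviceId, AVG(temp) AS avgTemp FROM readings GROUP BY deviceId")

  avgTemps.writeStream
    .outputMode("complete") // emit the full updated result table on each trigger
    .format("console")
    .start()
    .awaitTermination()
}
```

The GROUP BY here is the “time-varying relation” idea in action: as new rows arrive on the stream, the result table is continuously updated rather than computed once.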
The adoption of fast data technologies is on a steep rise, but the low-level streaming APIs of the engines mentioned above require specialized knowledge to program new applications.
The availability of SQL enables a wider range of professionals to participate in the development of streaming data analytics pipelines, alleviating the skill shortage in the market and helping organizations repurpose their workforces as their fast data adoption evolves.
5. Build on the shoulders of giants
As we mentioned at the beginning, we expect fast data applications to work reliably, continuously, and to deliver results in near real time. These requirements have strong scalability and resilience implications.
Developing standalone applications that fulfill those requirements would be prohibitively expensive: it would demand specialized knowledge of distributed systems, operating systems, and networks, along with large development and testing efforts to cover the complexity that distributed applications present. Instead, we build those applications on data-oriented frameworks, like Apache Spark and Apache Flink, or we resort to libraries that we can embed in our services, such as Kafka Streams and Akka Streams. These data-oriented stacks implement the low-level complexity and take care of the resilience of the application's execution. In turn, they offer a high-level abstraction that enables developers to focus on delivering business value.
To run our applications, we require computing system resources like CPU, memory, disk, and network bandwidth to be allocated to the critical data services that power the applications. When we work on a single machine, the operating system takes care of managing the resources allocated to applications. But when we run on a cluster of machines, how can we perform the resource management required by this new generation of distributed data-intensive applications?
Cluster managers, such as Apache Mesos, provide an abstraction that runs on top of any computing infrastructure (public/private cloud, VMs, bare metal) to present a single unified resource pool. Mesos achieves that unification by aggregating the infrastructure's resources and then offering resource slices, like x CPUs, y MB of RAM, and z GB of disk, to applications. Applications are then able to accept or reject those offers based on their own needs. Mesos can provide resources to execute applications and data services such as Apache Kafka, Apache Spark, and HDFS, or container schedulers such as Kubernetes.
Deploying a cluster management solution like Mesosphere DC/OS helps us take advantage of Mesos to deliver a complete fast data platform: it adds the deployment of standard components, provides a runtime for applications, and delivers foundational services such as security and user management. It also enables near-unbounded scalability, as more commodity or specialized hardware can be seamlessly added to existing clusters.
This results in increased enterprise agility as resources can be dynamically redirected to support the varying demands of different applications.
Fast data applications are becoming a key asset for enterprises as they develop competitive advantages in a world where actionable insights need to be produced and consumed in real time.
Building fast data architectures that deliver scalable and resilient real-time applications is a challenging undertaking. The five recommendations that we have collected in this post should help you in your journey from requirements capture to cluster-wide deployment.
A successful implementation of the fast data architecture will give your business the ability to accelerate its data-driven innovation by creating an environment to dynamically create, deploy, and operate end-to-end data-intensive applications. In turn, you will gain increased competitive advantage and the agility to react to your specific market challenges.
This post is a collaboration between O’Reilly and Mesosphere. See our statement of editorial independence.
Imagine feeling like you ran a marathon when you're actually just getting off the couch. Imagine the extreme anxiety of living with bouts of dizziness, chest pain, and an accelerated heartbeat until a doctor explains that these symptoms are not “nothing” and that, in fact, you have cardiomyopathy. This condition can lead to heart failure and eventually the need for a heart transplant, but that desperately needed organ may not be available in time.
Organ transplants are in high demand in the United States. The heart is the third most requested organ, with 4,000 candidates on the waitlist and over 2,000 heart transplant surgeries performed in 2017.
Heart transplantation is an exorbitantly expensive procedure (Table 1-1) with a myriad of potential complications. Rejection remains an unsolved problem and, after all these years, a major risk of every organ transplant.
If a heart substitute could be developed, capable of emulating a human heart, this would address a critical need for patients in these dire straits.
Moreover, a lab-grown heart might be free of rejection issues if a patient's own cells are used, reducing the total cost of a heart transplant from $1,382,400 to $1,245,800 by eliminating procurement (the surgery to remove the organ from the donor) and immunosuppressants (Table 1-1).
This saving of $136,600 per transplant translates to approximately $270 million across the 2,000 surgeries performed in 2017. Given the 4,000 patients on the waitlist, a rough estimate of the market size of a human heart substitute could reach nearly $1 billion per year for the custom heart industry in the US.
Scientists have been working to address the shortage of transplantable tissue, and new opportunities have opened in the past decades as important innovations in stem cell biology have overcome many of the technological limitations.
Pluripotent stem cells have the ability to differentiate into any cell type in the body. Breakthroughs in induced pluripotent stem cell (iPSC) technology allow somatic cells to be reprogrammed, making it possible to obtain pluripotent, embryonic-like stem cells without embryos. Cells derived from iPSCs can serve as building blocks for tissues, turning the idea of growing organs outside of the body from science fiction into reality.
While we are able to grow pieces of heart muscle from iPSCs to patch small, damaged areas (as Nenad Bursac's group at Duke University did), an achievement in itself, there are many obstacles to overcome when building a replica of the human heart, owing to its complex structure and composition.
Nevertheless, building something that is functionally identical to a heart may be possible. Many interesting approaches are currently being explored to produce adequate, artificial replacements for this most critical organ.
Recycling an Unusable Heart
Time is critical in organ transplantation. When a donor heart becomes available, it can only be preserved for a short time for transplantation. As a result, 20% of donated hearts go to waste because they cannot be transplanted to a recipient in time.
What can we do with these “unusable” hearts? Harald Ott’s group at Harvard Medical School and Massachusetts General Hospital found a way to give them new life.
Using a detergent solution, the researchers first strip the cells, DNA and lipids from the heart, leaving an intact, complex acellular cardiac extracellular matrix (ECM) scaffold.
This cell-free scaffold is a mesh that holds the heart together and contains blood vessels that can transport oxygen, nutrients, and waste. Next, they re-seed the ECM with new cardiomyocytes derived from iPSCs to form the muscular wall of the heart.
The re-cellularized ECM is incubated in a bioreactor that provides all the nutrients and mechanical stimulation needed for tissue development. After two weeks, the ECM is covered with layers of cardiomyocytes and exhibits functional contraction upon electrical stimulation.
Several challenges remain to be addressed in this approach, including ensuring that the right number of cells are in the right combination for proper heart function, as well as engineering a bioreactor that better mimics the human body. Despite these obstacles, this approach has served as a convincing proof of concept that functional hearts can be re-grown in labs on existing cardiac ECMs.
Building a Vegetarian Heart
As an alternative to using ECMs from animal sources, a multi-institutional collaboration among Worcester Polytechnic Institute, the University of Wisconsin-Madison, and Arkansas State University-Jonesboro seeks to develop plant-based ECM scaffolds.
This idea is far from intuitive, given the vast differences between animals and plants. However, plant vascular structures follow many of the same physiological laws as the cardiovascular systems of animals. More importantly, a plant-derived ECM is mainly composed of biocompatible materials, making it an ideal candidate for a lab-grown organ.
In a procedure analogous to the one just described, scientists first wash away the cellular material from a spinach leaf to obtain an acellular ECM with functional veins. The spinach ECM is then coated with human endothelial cells on the leaf vasculature and seeded with human cardiomyocytes.
After a few days, the cells attach to the spinach ECM and contract in the same way as cells grown in a tissue culture. This demonstrates the possibility of culturing human cells on a plant scaffold. However, how well plant veins can sustain human tissue, and how the immune system responds to the plant scaffold, still require further investigation.
Exploiting a Pig’s Heart
While the heart is a delicate and complicated machine, it is hardly unique to humans. Could we substitute a human heart with a heart from an animal? This process is known as xenotransplantation.
In the mid-20th century, severe immune responses made all xenotransplantation between nonhuman primates and humans fatal. However, in recent years, with better understanding of the human immune system, xenotransplantation has gradually regained attention and consideration.
Among all of the possible animal donors, the pig is considered to be the best candidate, as it is widely accessible, genetically similar to humans, and has organs of approximately the same size.
The creation of a genetically engineered pig lacking the gal gene, a main trigger of human immune reaction, has significantly extended the survival time of baboons receiving these porcine organ transplants.
Muhammad Mohiuddin's group at the National Heart, Lung, and Blood Institute took a step further with the gal-knockout pigs. By inserting two human genes, whose proteins prevent host cell damage and blood coagulation, into the gal-free pig genome, his group was able to keep a porcine heart alive in a baboon for over two years.
Delayed rejection ultimately happened and led to the death of the baboon, but modifying pig genes, altering the types and expression levels of human genes in pigs, and optimizing anti-rejection drugs may lead to a viable system of xenotransplantation.
In addition to rejection complications, viral infection is another concern for xenotransplantation. Genes from ancient infections from porcine endogenous retroviruses (PERV) are scattered throughout a pig’s genome.
Although it is not clear whether PERVs could actually produce viral particles capable of infecting humans, the presence of those viral remnants still generates concerns among the scientific community and casts a shadow on the use of xenotransplantation.
To overcome this problem, eGenesis, a Boston-based startup, has devoted its expertise in genome editing to make PERV-free pigs. Founded in 2015 by geneticists Luhan Yang and George Church from Harvard Medical School, the company’s mission is to use genome editing to make safe human tissues and organs for transplant.
Still at an early stage, the company announced in March 2017 a $38 million series A financing led by Biomatics Capital and ARCH Venture Partners. In a recent study, the company used CRISPR-Cas9-facilitated multiplex genome editing to successfully remove all PERV genes from the pig genome, producing the first batch of PERV-free piglets. This groundbreaking progress is an important step toward eliminating potential viral infections and making xenotransplantation safer.
Crafting a Chimera Heart
If a porcine heart for transplant is not readily available, perhaps growing a human heart inside a pig (known as a chimera) may be a more straightforward approach.
Along those lines, Juan Carlos Izpisua Belmonte's group at the Salk Institute has successfully developed a rat-mouse chimera with a rat-cell-enriched pancreas by injecting rat pluripotent cells into a mouse embryo lacking a critical gene for pancreas development.
This success led the researchers to conduct a pioneering experiment in which they injected human pluripotent cells into pig blastocysts, which were grown into viable human-pig chimeric embryos. However, the chimeric embryos exhibited low human cell content and slow growth rates, owing to the significant evolutionary distance between humans and pigs.
Nevertheless, this was the first successful example of introducing human stem cells into large animals and was a significant step toward growing transplantable human tissues/organs in host animals.
Human-animal chimera research is a heavily debated issue, with the public expressing concerns that such research could blur the border between humans and other species.
The National Institutes of Health (NIH) also prohibits several approaches to creating human-animal chimeras, but it is now considering revising its policy to better guide the rapid progress of the field.
Resuming 3D Printing
Earlier 3D bioprinting focused on generating molds for cells to attach to. Although this approach cannot produce solid organs, the possibility of “printing” an organ remains an attractive direction for researchers to pursue.
Creating a scaffold with a vascular system is one of the biggest challenges in tissue printing. Anthony Atala’s group from the Wake Forest Institute for Regenerative Medicine recently reported a 3D printing system that can print layers of cells with biodegradable plastic micro-channels that mimic vascular systems.
Prellis Biologics, a San Francisco-based startup, is working on printing blood vessels based on real human vascular data. Its proprietary laser-based 3D printing technique adapts imaging information from a microscope, using the projected laser to polymerize bio-substrates at very high resolution. In combination with computer-aided design, it can build a precise scaffold system to support cells and tissues.
The company recently received a $1.8 million seed investment led by True Ventures, bringing its total funding to $1.92 million. The company's early goal is to print a human organoid, a miniature organ that mimics the functions of a real organ; it has since successfully created human lymph organoids that can generate human antibodies.
Organovo, another 3D printing pioneer, went a step further with scaffold-free bioprinting technology and is now one of the most established companies in the field. Its approach, unlike that of other companies in the space, relies on the native programming of cells.
Organovo uses multicellular aggregates as bio-ink. When cells are architecturally positioned and stabilized by bio-inert hydrogels, the presence of specific cell types and the cell-cell contact can generate a local environment that mimics in vivo conditions to achieve enhanced tissue-specific functions.
The liver organoids it produces are a few millimeters thick and can survive in vitro for about one month.
These 3D-printed organoids are used as animal model substitutes for drug tests, which could significantly reduce the failure rate for clinical trials. The goal of the company is to make liver tissue patches the size of an iPhone for transplant in the next 10 years.
Because 3D bioprinting is the most active direction in the custom organ industry, other companies are shifting course from their original businesses.
United Therapeutics, a biotech that mainly develops medicines for pulmonary arterial hypertension, announced a collaboration with 3D Systems in early 2017 to launch a multiyear project on printing solid organ scaffolds. The company also released a longer-term pipeline for developing several transplantable organs, including the heart and lung.
Putting Hearts on a Chip
Aside from building a full-size heart, the cardiac microphysiological system (MPS), or “heart-on-a-chip,” is another direction in the field. It consists of a cell chamber containing three-dimensional cardiac tissue, with microcirculation channels that mimic the nutrient/oxygen transport of a real heart. Like organoids, a heart-on-a-chip is a safer and more effective way to assess the effects of different drugs.
Kevin Healy's group at UC Berkeley developed a heart-on-a-chip using human iPSC-derived cardiac muscle tissue. After the cardiomyocytes are loaded onto the chip at high packing density and low pressure, the cells form 3D cardiac tissue within 24 hours and start to beat in a uniaxial manner after about a week. Pharmacological data derived from these MPS are more physiologically relevant than data from 2D cell-based studies.
Jennifer Lewis and Kevin K. Parker of the Wyss Institute at Harvard University also reported the first 3D-printed heart-on-a-chip with soft strain sensors. The sensors have micro-grooves on their surface that guide cardiac cells to self-assemble into physio-mimetic cardiac tissues.
The contraction of those cardiac tissues bends the soft sensor, inducing a change in resistance that can be translated into detectable signals.
Tara Biosystems, a heart-on-a-chip startup in New York City, recently closed its $9 million series A financing, co-led by Trancos Ventures and Morgan Noble, to launch its first products in 2018 and accelerate the development of cardiac tissues and disease models.
Despite the increasing interest in custom organ research and the immense investment in R&D, it may still take decades before transplantable lab-grown organs are generated.
The U.S. Food and Drug Administration (FDA) regulations for approvals will also certainly play an important role in determining when the first transplantable organ can be used in patients.
Looking forward, there are many potential directions, such as lowering the cost and risks of clinical trials and developing personalized drug screens and therapies. As the frontiers of science inexorably expand, obstacles to these technologies are giving way, and this holy grail of medicine, custom organs, is slowly becoming attainable.
Develop and refine your skills with 100+ new live online trainings we opened up for April and May on our learning platform.
Space is limited and these trainings often fill up.
Getting Started with Amazon Web Services (AWS), April 19-20
Python Data Handling: A Deeper Dive, April 20
Getting Started with Go, April 24-25
Getting Started with Vue.js, April 30
Building a Cloud Roadmap, May 1
Git Fundamentals, May 1-2
IPv4 Subnetting, May 2-3
SQL Fundamentals for Data, May 2-3
Managing Team Conflict, May 3
Cyber Security Fundamentals, May 3-4
Introducing Blockchain, May 7
Get Started with NLP, May 7
Building Deployment Pipelines with Jenkins 2, May 7 and 9
Introduction to Apache Spark 2.x, May 7-9
Deep Learning Fundamentals, May 8
Acing the CCNA Exam, May 8
Design Patterns Boot Camp, May 8-9
Introduction to Lean, May 9
Cloud Native Architecture Patterns, May 9-10
Deep Reinforcement Learning, May 10
Scalable Web Development with Angular, May 10-11
Bash Shell Scripting in 3 Hours, May 14
Learn Linux in 3 Hours, May 14
Product Management in Practice, May 14-15
IoT Fundamentals, May 14-15
Architecture Without an End State, May 16-17
Agile for Everybody, May 17
Practical Data Cleaning with Python, May 17-18
Troubleshooting Agile, May 18
Managing your Manager, May 18
Building Chatbots with AWS, May 18
Your First 30 Days as a Manager, May 21
From Developer to Software Architect, May 22-23
CISSP Crash Course, May 22-23
Introduction to Kubernetes, May 22
CCNP R/S ROUTE (300-101) Crash Course, May 22-24
Docker: Beyond the Basics (CI & CD), May 23-24
Introduction to TensorFlow, May 23-24
Cyber Security Defense, May 24
The DevOps Toolkit, May 24-25
Kubernetes in 3 Hours, May 25
Ansible in 3 Hours, May 25
CCNA Security Crash Course, May 29-30
Scala: Beyond the Basics, May 29-30
Microservices Architecture and Design, May 29-30
Docker: Up and Running, May 29-30
PMP Crash Course, May 31-June 1
Test Driven Development in Java, May 31-June 1
Architecture Without an End State, May 31-June 1
Visit our learning platform for more information on these and other live online trainings.