In this new era of cloud computing, faster and cheaper are not enough. The modern Ops “toolkit” needs a log analytics service built for cloud-based environments, one that offers easy log data centralization, autonomic analysis, and real-time monitoring for connecting distributed systems and teams.
The rapid emergence and dominance of cloud-based systems have contributed to explosive growth in machine-generated log data. We have heard from our community of more than 25,000 IT and DevOps users that they need an alternative for managing and analyzing log data – a technology that is cost-effective, less painful to maintain, and ultimately smarter about how it analyzes huge volumes of data.
Yesterday, we announced our new Logentries Anomaly Detection and Inactivity Alerting capabilities. In short, you can now automatically use your log data to monitor and track abnormal system and application events, and automatically get notified when important events change dramatically or fail to occur at all. While this is a natural next step for us, and an important product capability for our users, we see it as a milestone in our much bigger mission to deliver the world’s smartest and most accessible log analytics service.
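To give a concrete feel for what inactivity alerting means, here is a minimal, purely illustrative sketch in Python. It is not the Logentries implementation or API; the event name, window size, and notify() hook are all assumptions made for the example.

```python
from datetime import datetime, timedelta, timezone

# Purely illustrative: a toy inactivity check. The window size, event name,
# and notify() hook are assumptions for this sketch, not the Logentries API.
INACTIVITY_WINDOW = timedelta(minutes=15)

def notify(message: str) -> None:
    # Placeholder for a real alerting channel (email, webhook, pager, ...).
    print(f"ALERT: {message}")

def check_inactivity(last_seen: dict, event_name: str, now: datetime) -> bool:
    """Alert if an expected event has not been logged within the window."""
    last = last_seen.get(event_name)
    if last is None or now - last > INACTIVITY_WINDOW:
        notify(f"No '{event_name}' events seen in the last {INACTIVITY_WINDOW}")
        return True
    return False

# Example: the heartbeat event was last logged 20 minutes ago, so this alerts.
now = datetime.now(timezone.utc)
last_seen = {"payment-service.heartbeat": now - timedelta(minutes=20)}
check_inactivity(last_seen, "payment-service.heartbeat", now)
```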
Logentries was architected and built for the cloud. And, as any cloud service should be, it is easily accessible to individuals anywhere in the world, regardless of their technical resources or organizational size.
Log management technologies need to move beyond the basic reactive capabilities of search, storage, and complex query languages to a proactive, real-time model that is self-learning, self-configuring and self-optimizing.
We call this Autonomic Analytics for IT and Dev Ops.
Autonomic Analytics is about providing a smarter way to manage and learn from the massive amounts of machine data produced on a daily basis – technologies that can self-learn and self-optimize such that users can begin to investigate the data without the need for a data scientist or engineer.
In the early 2000s, the idea of autonomic computing grew out of the need to find a new way to manage modern IT systems that were becoming more powerful and also much more complex. Researchers at IBM drew up the autonomic computing manifesto, which proclaimed that dealing with complexity was the single most important challenge for the IT industry. Autonomic computing was proposed as a way to deal with such complexities and was modeled on the autonomic nervous system of the human body.
The idea is to embed the complexity in the system infrastructure itself and then to automate its management. The analogy is drawn with the human body: as you go about your daily business, your body manages and monitors a lot of complex tasks without you needing to think about them. It regulates your heartbeat, controls your body temperature and blood sugar levels, adjusts your pupils so that just the right amount of light reaches your eyes, digests your food, and so on. Just as every system in your body is self-managed, monitored, and restored, autonomic computing aims to manage the “heartbeat” of your systems with a similarly automated and self-sustaining approach to running, monitoring, and learning from them.
Autonomic computing is probably best summed up in the seminal 2003 research paper on the topic by Jeff Kephart and David Chess, “The Vision of Autonomic Computing”:
Autonomic computing, perhaps the most attractive approach to solving this problem, creates systems that can manage themselves when given high-level objectives from administrators. Systems manage themselves according to an administrator’s goals. New components integrate as effortlessly as a new cell establishes itself in the human body. These ideas are not science fiction, but elements of the grand challenge to create self-managing computing systems.
The paper actually became the most widely cited article in the field of autonomic computing, and one of the most widely cited computer science articles of that year.
As I look at the problems facing those analyzing and managing log data today, I can’t help looking at how autonomic computing research has been applied to better manage complex distributed systems. Rather than manually trying to analyze and understand your data by having engineers or data scientists construct and run queries, we know there is a better approach, one where logs enable your systems to self-learn, self-configure and self-optimize based on high-level objectives and business goals. This eliminates the traditional pain and cost of log management because the inherent complexity is hidden within the log management infrastructure itself.
At a basic level, the ability of your logging service to identify anomalies, or to learn baseline and new patterns from your data, is incredibly valuable.
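To make that concrete, here is a minimal, hypothetical sketch of baseline learning: it learns the mean and standard deviation of per-minute log event counts and flags counts that deviate far from that baseline. The three-sigma threshold and the sample data are assumptions for illustration, not how Logentries implements anomaly detection.

```python
import math

# Illustrative baseline learning: compute the mean and standard deviation of
# per-minute event counts, then flag counts that deviate strongly from them.
def learn_baseline(counts: list) -> tuple:
    mean = sum(counts) / len(counts)
    variance = sum((c - mean) ** 2 for c in counts) / len(counts)
    return mean, math.sqrt(variance)

def is_anomalous(count: float, mean: float, std: float, sigmas: float = 3.0) -> bool:
    # A count is anomalous if it sits more than `sigmas` standard deviations
    # from the learned baseline (a common, simple heuristic).
    return abs(count - mean) > sigmas * std

# Example: error counts per minute during a quiet period, then a sudden burst.
history = [2, 3, 1, 2, 4, 3, 2, 2, 3, 1]
mean, std = learn_baseline(history)
print(is_anomalous(40, mean, std))  # True: a burst of errors stands out from the baseline
print(is_anomalous(3, mean, std))   # False: within the learned baseline
```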
Here at Logentries we’re embracing the age-old notion that it isn’t about working harder, it’s about working smarter – and the same should be true of your log management and analytics service.