What fascinates me about Kubernetes?

Social media, for me, is a space full of mysteries 😊 @Ewa navigates it far better than I do, but I occasionally allow myself a post as well. And although I definitely prefer concrete backend work to glamour, 😉 you know very well that what gets appreciated most is what you see on the front end. As the “founding father” of our organization, it therefore falls to me to speak up from time to time and draw your attention to what my team and I are doing 😀

The occasion is perfect, because next month it will be two years since we, as a team, got seriously involved with Kubernetes.

During this period we have not only carried out some interesting VMware Tanzu Kubernetes implementations (both the “with vSphere” and TKGm variants), but also migrated several sizable applications to these environments. We are also developing our own modules for “Keight’s” 😉: yes, because we are building our own Container Storage Interface driver (more about it in the future) and maintaining our own K8s platform, which has managed to move off microK8s and switch to vanilla K8s (more about that in the future, too).

And what is so fascinating about it? Oooo… here I would have to go into detail: list its features, describe them, justify them, and so on. I decided it would be better to lean on the team 🙂

I wrote down the following statements during 1:1 interviews, naturally, as they were spoken, without correction. Read on.

Kacper

What fascinates you about K8s?

What fascinates me about K8s? That this project is so big and open source. I like that the code is open and you can look into it. You can see each component with your own eyes, not just in theory. I really like the documentation: extensive, it explains everything. Kubernetes changes very quickly and you have to stay up to date, but it has great versioning mechanisms. You can move forward without losing compatibility. I really like the range of possibilities that K8s gives. We can automate practically everything. And at the same time be sure that HA will be provided. The kube-apiserver (and other key components, ed.) is replicated across control plane nodes. Transparency of operators and resource management. This is unified by the API and the objects held in etcd. Anyone interested in building a solution on top of K8s can create their own controllers/operators that allow flexible management of the cluster’s state.

What fascinates me is that it (K8s) gives you tools that allow you to interact with the components of the physical nodes and the physical infrastructure. It’s this flexibility that allows you to create environment-specific solutions. I’m thinking of CSI, CNI, Cloud… I just can’t remember what it was called… the interface for integration with cloud solutions.

Well, and the whole idea of this technology being based on containerization. And the idea of creating solutions that are stateless and replicated, and therefore fault-tolerant. Thanks to that, we are able to run canary upgrades using K8s mechanisms, which keep applications running smoothly with the lowest possible downtime.
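
A small aside from me: the smooth upgrades Kacper mentions come down, in large part, to the Deployment rollout strategy. A minimal sketch using the official Kubernetes Python client (the “demo-app” name, image and sizing are made up for illustration) could look like this:

```python
# A minimal, illustrative sketch: a Deployment whose rolling-update strategy
# never takes a ready replica down before its replacement is up.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
        strategy=client.V1DeploymentStrategy(
            type="RollingUpdate",
            rolling_update=client.V1RollingUpdateDeployment(
                max_unavailable=0,  # keep full capacity during the rollout
                max_surge=1,        # add one new pod before retiring an old one
            ),
        ),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="app", image="demo-app:1.1")]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```

Changing the image in such a Deployment then rolls pods over gradually, which is exactly the “lowest possible downtime” behaviour described above; a canary goes one step further by sending only part of the traffic to the new version first.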

Błażej

Are you fascinated by K8s?

Of course I am.

Why?

There is only one answer. Because of its complexity. I’m fascinated by the complexity of this whole environment, but also by how it’s being developed all the time. I haven’t had much to do with it yet, but as I’m getting to know it, it’s interesting to see how many management tools there are directly in K8s. How well provisioning is solved, how well the API is used and the fact that we can gather information with it and control what’s going on. It’s very fascinating how it all works together. Also how resources are assigned, what happens if something is missing, how it all cooperates so that everything keeps working and doesn’t crash. All the orchestration makes it play together…

The management tools are interesting, and that you can create your own.

Provisioning is interesting: how much you can know about what’s going on, how to control it, what’s happening to it (the application), and what corner cases there are. There is also good feedback on what is worth keeping an eye on and what is worth fixing (improving).

This is not a tool that can be learned in a moment. It requires a lot of knowledge, gathered in practice and in theory, to fully understand it. And something that is difficult to understand is certainly interesting 😉

Lukas

Are you fascinated by Kubernetes?

I am fascinated by its capabilities.

Why?

(…long silence…) I like the change in approach to running applications compared to previous patterns… and that the constraints it imposes allow you to create interchangeable modules. You can write a CSI driver and any application that supports it will work; the same is true for networking and service mesh. It doesn’t have too many components by itself, but adding external components is simple and easy. These components can even be interchanged, which is important, e.g. microK8s uses dqlite instead of etcd. The architecture is very nicely done, it allows for virtually any expansion of capabilities. That is, we don’t just have the containers themselves, but also, for example, VM management, K8s operators and objects. Well, and I think it’s very well done, because it was built by people from Google, after they had already written their own software, still without containers, Borg I think, and the architecture is done in such a way that the things I mentioned before allow extensibility. Well, and it’s open source. And it’s well thought out.

And how does Tanzu fit into this?

Tanzu expands K8s from all possible angles. Everything that can be done (in K8s, ed.), Tanzu does. Going from the bottom up: storage (a CSI driver to consume vSphere disks), networking, the whole Antrea integration with NSX, which gives network observability even between K8s clusters, and load balancers; then you have management, sort of a level up. It’s awesome, because even the Supervisor Cluster is itself K8s, and the thing that creates clusters is an operator. So the Supervisor Cluster is something of a K8s of K8s. Easy to manage, upgrade, destroy; it’s automated. Standing up something comparable manually is very hard, because it’s technically complicated. You have to know a lot of components and how to connect them. And it stands on the shoulders of giants, namely a stable vSphere.

And how would you relate it to Sarkan?

We have the storage part done. We don’t have all these components. We don’t have a supervisor cluster yet, but when we do, it will be compatible with more than just vSphere. Our solution only requires disks and Ubuntu, and as long as these are maintained, it all works out of the box. To have Tanzu, you must have compatible servers and storage. So for now, if you want it mainly for storage, Sarkan will be simpler and cheaper.

And what is so wow?

The MP that we write to Flopsar 😀… but with K8s… this compatibility: as long as you have these components, the application will work everywhere, on any distribution, as long as it isn’t somehow messed up, in the sense of specifically adapted.

Arek

Are you fascinated by Kubernetes?

Of course I am 🙂

And what fascinates you about it?

…starting with containerization and microservices and moving on to K8s… because the whole topic is sort of fascinating. K8s gives a lot of support for management. It can be divided into different tiers, which fascinates me. Parts: operator, developer, ecosystem, community. From development to just management, scalability, this reliability in accessing systems; from the operator’s point of view: monitoring, optimization even in terms of cost. Scalability is hard to estimate, and with K8s we can optimize it. In terms of scalability, when we have an application that is unevenly loaded, K8s allows us to handle the increased load wherever it appears… this is a very cool feature of K8s…

I don’t know what else… it also certainly simplifies application deployment. And here, with the support of microservices, we are able to split the team into smaller teams that develop services in parallel… automating various activities is a big plus. And it’s cool that we manage (resources, ed.) in a declarative way. We define the application’s requirements and K8s provides them… I don’t know, I think I’ve exhausted the topic…

But what is mega awesome, so much that you would say: “how brilliantly thought out!”?

Hmmm… well, I guess this will be quite general… but the management of these applications. And operators and security mechanisms.

Benny

Are you fascinated by Kubernetes?

Yes.

Why?

Um… Because it’s a cool tool… It makes it possible to… hmmm… it sort of… just works… It provides a lot of cool stuff both for… hmmm… some hobbyist application deployment and for large enterprise clusters. It lets you not worry about things like load balancing and scalability. It takes care of the big stuff for the user. Admittedly, the entry threshold is quite high and you have to learn a lot, but once you grasp the basics, you can easily do deployments. It takes care of rollouts and rollbacks; if something doesn’t go right in a deployment, K8s undoes it. Updating applications is much more seamless. All in all, technology likes to break down, services and hardware fail. And the self-healing ability of K8s lets it monitor itself, check what’s going on in the app and try to fix any problems on its own. We don’t have to do anything and K8s does it for us… In short, it’s simply a cool tool.

But is there anything truly awesome about it?

Well, it is awesome… what can I say… 😀… the thing that blows my mind is that this solution works brilliantly given how complex it is 🙂… or the ability to extend its functionality: we can literally create custom K8s resources and act on them as if they were the default ones… a great thing… creating native K8s applications that run in and communicate with the K8s cluster… that’s an awesome thing 🙂…
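
A quick note from me: that “as if they were the default ones” is literally how it looks from client code. Here is a minimal sketch with the Kubernetes Python client, assuming a purely hypothetical widgets.example.com CRD is already installed in the cluster:

```python
# Illustrative only: list instances of a hypothetical CRD the same way
# you would list built-in objects. Group, version and plural are made up.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

widgets = custom.list_namespaced_custom_object(
    group="example.com",   # API group declared by the hypothetical CRD
    version="v1",          # served version
    namespace="default",
    plural="widgets",      # plural resource name from the CRD spec
)
for item in widgets["items"]:
    print(item["metadata"]["name"])
```

Same client, same verbs, same kind of response as for Pods or Deployments, which is what makes operators feel so native.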

Piotrek

Are you fascinated by Kubernetes?

All in all, I wasn’t really familiar with K8s before. But during my first assignment on Sarkan, the guys let me get acquainted with it, and I even told them I was shocked at how it works. Even without having seen it before, I could say that it works great.

Why does it fascinate you?

Hm… I think it’s cool that we don’t have to personally watch over the containers, or the whole applications built from those containers. It does that for us. So that there is no interruption when something breaks, it will bring it up again. What else? It’s cool that we can specify the CPU and RAM for these containers, and I guess that’s it. It’s nice that such a tool was created and you don’t have to do all this manually.

But what is so really awesome?

Well, back then on voffice (virtual all-day collaborative work using Teams, because we work remotely – editor’s note)… I said I was shocked, because when I removed the metadata server, it changed the leader by itself and then brought up a new server by itself… I didn’t know yet that it worked like that, maybe that’s why… for now, I think that’s the most awesome thing… I guess…

To summarize my colleagues’ opinions, it fascinates me because it has:

  • rollback mechanism for applications deployed on it
  • built-in HA and scalability
  • modularity of components and running services
  • “transparency” of complex architecture
  • great documentation
  • ability to flexibly extend, easily add functionality for specific use-cases
  • observability
  • self-healing
  • Personally, “my favorite” is Service Mesh 🙂

And what fascinates you about Kubernetes that I haven’t cited here yet?

VMware Summer Partner Day – relationship building

VMware Summer Partner Day sprinkled with summer rain.
Success or fiasco?

An outdoor event should take place in the sunshine. The word “summer” obliges, evoking only warm feelings. The weather is not under our control, but the atmosphere we create is, and this one was excellent 😊

Provocatively, I started with the rain, although it only added to the charm of the event. What mattered most were the emotions I left the meeting with, because it is we, the people, who matter most. We are the ones who have the power to shape and create what surrounds us.

I recently got my hands on the book “Relationship Marketing”. Relationships, ties, connections…
Have you noticed that we are all connected to each other, and that nothing in life happens by chance?

Building relationships is the key to success, and I will repeat it like a mantra. At INDEVOPS we know this, and we don’t have to force it, because it simply happens.
This is exactly what the VMware Summer Partner Day meeting was supposed to look like 😊

Beautiful, smiling people, from small talk to discussions, an exchange of positive thoughts, full-on integration 😊 You could sense a hunger for other people, all completely natural.
I have a feeling that day to day we often forget that business can also be spontaneous and relaxed.

That’s how I felt, and so it was. The smell of flowers, the warm evening, the buzz of conversation. Everything I was looking forward to, I got.

We were beautifully hosted by VMware, and the Tranquil restaurant was anything but tranquil that evening, which is a big plus 😊
Although here the CEO and I have different impressions, because Pawel Orzechowski thinks it was peaceful. Well, we’re both right, because individual experiences count, and that’s great!!!

Migration of services to the Telegraf Agent

While taking part in the school event “My parent’s interesting profession”, my daughter asked me: “Dad, what do you actually do at work?”. And I realised that, in that first moment, I could not simply explain something that seems obvious to me. It’s not easy to explain in an understandable way to a ten-year-old girl what architecture design, business process automation or a virtualization system is, or even what the word “deployment” means. Let alone talk about it in an interesting and understandable way in front of the so-called “lodge of mockers” that her classmates make up 😉.

After a moment of reflection and a review of the issues I had faced recently, I told her how, in the 21st century, we save something that, in my opinion, is most precious – time. And how we reduce the number of “gray hairs” on her classmates’ parents’ heads by eliminating repetitive activities from their lives 😊

So, briefly today, using the example of the INDEVOPS team’s involvement in a relatively simple issue. Let’s take a look at a large monitoring system for infrastructure, services and applications based on the vRealize (Aria) Operations solution. It has been expanded over the years and now uses more than 4,000 EP Ops agent instances. Since version 8.4 supports only the Telegraf agent, the long-awaited day had come when those several thousand EP Ops objects had to be “upgraded” to Telegraf objects. Unfortunately, with no upgrade path available, this meant not only redefining but also reconfiguring several thousand new objects.

If any of you have ever configured Telegraf objects manually, you are well aware of the Sisyphean task involved. I can just picture the enthusiasm of administrators clicking through all the objects one by one in the GUI, and the satisfaction on executives’ faces at a quickly executed reconfiguration with monitoring continuity maintained. If it’s only a matter of clicking through a few or a dozen objects once in a while, using the standard interface can even be fun.

However, when we have to recreate 4,000 objects in a short period of time, reconfigure even more of them and introduce naming conventions, the enthusiasm for manually executing constantly repeating activities fades. What’s worse, human error can always occur, which prolongs the process and makes it more annoying. And at this scale it would mean several months of work.

Therefore, we were given a task to complete 🙂: please recreate in the Telegraf agent all HTTP/TCP/ICMP checks, processes and EP Ops services, maintaining the agreed naming convention and continuity of monitoring. Deadline – one month.

We got down to work, as our paramount goal is timeliness, and thus customer satisfaction.

In the first step, we analysed and inventoried all EP Ops objects.

Among other things, we used the vROps reporting module, which helped us exclude objects that no longer exist or that generate errors.

Then our development team, for whom nothing is impossible, developed a tool that configures:

  • HTTP/TCP/ICMP checks
  • Linux processes
  • Windows services

It is worth mentioning that the solution we prepared retrieves the current object configuration from the vROps instance and adapts it to the Telegraf agent configuration template. Before the final run, the administrator can verify that the data has been entered correctly and that the systems on which the objects will be configured have the Telegraf agent installed.
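
The tool itself is internal, but to give a rough idea of the approach, a heavily simplified sketch is shown below. It assumes the standard vROps Suite API endpoints and the stock Telegraf http_response input plugin; the host name, credentials and the adapter-kind filter are made up, and field names should be double-checked against your vROps version:

```python
# Heavily simplified, illustrative sketch: pull monitored objects from the
# vROps Suite API and render a Telegraf http_response input for each one.
import requests

VROPS = "https://vrops.example.local"  # illustrative host name

# Acquire a Suite API token.
auth = requests.post(
    f"{VROPS}/suite-api/api/auth/token/acquire",
    json={"username": "svc-migration", "password": "***"},
    headers={"Accept": "application/json"},
    verify=False,  # lab only; verify certificates in production
)
token = auth.json()["token"]

# List resources, filtered here (as an assumption) by the EP Ops adapter kind.
resp = requests.get(
    f"{VROPS}/suite-api/api/resources",
    params={"adapterKind": "EP Ops Adapter"},
    headers={"Authorization": f"vRealizeOpsToken {token}", "Accept": "application/json"},
    verify=False,
)

# Render a minimal Telegraf config fragment per HTTP check.
fragments = []
for res in resp.json().get("resourceList", []):
    name = res["resourceKey"]["name"]
    fragments.append(
        "[[inputs.http_response]]\n"
        f'  urls = ["https://{name}"]\n'
        '  response_timeout = "10s"\n'
        "  [inputs.http_response.tags]\n"
        '    migrated_from = "EPOps"  # tag kept so naming conventions stay traceable\n'
    )

with open("telegraf-http-checks.conf", "w") as out:
    out.write("\n".join(fragments))
```

The real tool additionally validates the input data, checks that the Telegraf agent is present on the target systems, and produces the summary report mentioned in the next paragraph.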

Each run ends with a summary report listing the objects on which the configuration failed.

Using the tool, the migration process of more than 7,000 objects was completed within 5 days.

Naming standardization has been simplified, and with the simultaneous use of logical grouping rules in the system, objects are automatically assigned to the correct application, environment (production, test) and custodian.

Administrators now actively use the tool in their daily work when adding more applications to monitoring. Onboarding each successive application to Telegraf-based monitoring takes 50% less time.

Of course, at my daughter’s school I did not talk about the example above. My talk was about how technology and modern solutions make our daily lives easier and eliminate the activities that simply bore us by repeating themselves all the time.

Surprisingly, the “lodge of mockers” received my talk with great interest. They also had many interesting ideas and solutions for eliminating the everyday activities of going to school and learning from their own lives.

Automated cost allocation in IT environments

Access to data on fees for the use of IT systems allows you to answer the question “How much does it cost?”. But a module that provides reports and billing statements also lets you make key decisions about your IT environment. It will prove equally invaluable when you are looking for savings, planning a budget or comparing offers from different suppliers.

Why is it worth accounting for costs in IT environments?

The answer to this question is simple. Each participant in the process wants to know how much and for what exactly they are paying. It can be assumed with a high degree of certainty that the question “How much will it cost me?” is going to be raised at some stage of decision making (e.g. budget planning for a new system). It is important to determine at the outset whether migration to a new environment will generate savings for the business.

Cost allocation in IT environments can be considered from two points of view: from the IT infrastructure owner’s standpoint and from the perspective of the end user or the ordering party.

Benefits to owners or suppliers of infrastructure

What will the owner or supplier of IT infrastructure gain thanks to the implementation of the VMware vRealize Operations Manager (vROPS) module and a payment policy in VMware vRealize Automation (vRA)?

The most important benefits include:

  • Defining a pricing policy flexibly – it can be individually negotiated, while also taking into account the type of application, environment (production, test, development), as well as location.
  • Options to verify the current pricing policy based on financial performance.
  • Access to reports which allow you to identify entities generating the highest costs.
  • If the contractor has access to official price lists of other cloud providers, it is possible to perform comparative simulations for a single application or a given client.
  • An option to generate reports and statements of charges which can be attached to end-of-month invoices.
  • Fees for using a virtual machine are automatically determined based on the costs associated with the cloud infrastructure (e.g. for internal entities that use the same infrastructure).

Benefits to end users or purchasing entities

How can a company ordering or using a cloud environment take advantage of access to cost information?

End users can:

  • make informed decisions on whether to continue using or to opt out of services based on quantified cost and fees data, such as reports and detailed views;
  • check accounts on an ongoing basis in any time interval, e.g. daily, monthly or yearly;
  • compare charges over a given billing period, e.g. annually;
  • estimate the daily or monthly cost of a new system at the planning stage;
  • easily plan the budget for the coming months;
  • analyse the fees for cloud environment and compare them with the official price lists of other providers;
  • look for savings at the CPU/memory/storage level if it turns out that the current environment architecture is oversized relative to actual needs;
  • easily control expenses at the level of a particular internal entity or application, thanks to constant access to statements of current charges.

How do we do it at INDEVOPS?

Defining a pricing policy

Formulating a pricing policy underpins our operations within vRealize Automation. It is this policy that enables us to bill entities precisely. It includes rates for CPU/memory/storage and any additional services (such as license usage or IT support).

In the case of complex systems with very large budgets, approval from senior management on the client’s side is often required. In such situations, we define a pricing policy that makes the deployment of a new system conditional on accepting its price.
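
To make it concrete, here is a toy calculation of how such a policy turns into a monthly charge for a single VM. The rates and the VM sizing are invented for illustration and are not anyone’s actual price list:

```python
# Toy example of applying a pricing policy to one virtual machine.
# All rates below are invented for illustration.
RATE_PER_VCPU = 8.00     # monthly rate per vCPU
RATE_PER_GB_RAM = 3.50   # monthly rate per GB of memory
RATE_PER_GB_DISK = 0.10  # monthly rate per GB of storage
FLAT_SERVICES = 25.00    # e.g. licenses or IT support, as a flat add-on

def monthly_charge(vcpus: int, ram_gb: int, disk_gb: int) -> float:
    """Monthly charge for a VM under the illustrative rates above."""
    return (
        vcpus * RATE_PER_VCPU
        + ram_gb * RATE_PER_GB_RAM
        + disk_gb * RATE_PER_GB_DISK
        + FLAT_SERVICES
    )

# A 4 vCPU / 16 GB RAM / 200 GB disk machine:
print(monthly_charge(4, 16, 200))  # 32 + 56 + 20 + 25 = 133.0
```

In vRA the same idea is expressed as rates in the pricing policy rather than code, but the arithmetic behind the resulting reports boils down to the same thing.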

Cost dashboards for infrastructure owners

We utilise vRealize Operations Manager as a cost, reporting and billing module. All expenses that the owner of cloud infrastructure incurs in connection with its maintenance are entered into the cost module. These may include licence purchases, charges for electricity used e.g. for cooling servers, insurance, IT support services, purchasing software and additional applications, creating backups, renting space in the server room, etc.

The reports we prepare contain a summary of costs per single virtual machine, host, cluster, data center or location. Service providers can quickly see what the profit-to-cost ratio is, which allows them to measure and analyse business profitability and make informed decisions about further investments.

Billing information for end users

End customers using the vROps module have access to statements where they can check their current charges on an ongoing basis. In this module, they can also view the number of systems currently running, along with the costs they generate.

Billing statements are also very helpful when systems were oversized at the design stage and savings are required. The module gives end customers access to historical data, thanks to which they can compare charges from different settlement periods.

This part of the module also includes official price lists of the largest cloud providers operating on the market, such as Amazon (AWS), Microsoft (Azure), Google (GCP) – this is a valuable source of information for people who want to compare service offers.

Do you already know why allocating costs to virtual machines is one of the most important aspects of IT automation? If you have any questions about it, please get in touch.

We will be happy to help!

Get to know one of the elements to increase security in your company!

How many websites do you use? In how many banks do you have an account?
Do you have the same password everywhere?
Where do you keep your passwords: on a piece of paper by your monitor, in a notebook, in a text file?
Aren’t you afraid of your bank account or a dating site being hacked?

We often hear about data leaks, accounts being hacked or users’ passwords databases being exposed online.

Reports on this appear regularly on industry websites that promote security.

You probably use at least a dozen or even several dozen websites, and you may not even remember how many there are. On most of them you have set the same, or only slightly different, password. You haven’t changed it for a long time. Your passwords are uncomplicated, e.g. Pawel@1969 or Blok@da123. Additionally, some of them probably protect your company’s services.
What is that if not asking for trouble, with potentially very serious consequences? Especially in companies or organisations where you have access to applications containing very sensitive data.

How can you protect yourself against this?

The simplest solution is to use a password manager, many of which are available to users free of charge.

A good password manager has the following features:

  • encrypts the password database with your master password (changing the master password re-encrypts the database, but losing it makes the stored passwords unrecoverable),
  • downloads the encrypted password database locally to your device,
  • has applications for Windows PC, Linux, MacOS and also for Android and iOS phones,
  • allows you to check your password database against known data leaks, e.g. via https://haveibeenpwned.com/ (see the sketch after this list),
  • rates the strength of the passwords in the database so that you can improve weak ones,
  • has a strong password generator,
  • has browser add-ons and applications that automatically fill in login fields for applications and websites, without the need to copy and enter credentials manually.
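
To show how simple the building blocks are, here is a minimal sketch of two of those features: generating a strong random password and checking it against the Have I Been Pwned database via its k-anonymity API (only the first five characters of the SHA-1 hash ever leave your machine). It is illustrative only and not the code of any particular password manager:

```python
# Illustrative sketch: strong password generation plus a breach check
# against the Have I Been Pwned k-anonymity API.
import hashlib
import secrets
import string
import urllib.request

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breaches (0 = none)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, count = line.split(":")
            if candidate == suffix:
                return int(count)
    return 0

password = generate_password()
print(password, "- seen in breaches:", pwned_count(password), "times")
```

A real password manager does much more (encrypted storage, sync, autofill), but the point stands: none of this is magic, and there is no excuse for Pawel@1969.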

Using a password manager has only benefits:

  • you don’t have to remember many passwords,
  • you only remember one complicated master password, which you do not share with anyone,
  • it generates passwords that are not easy to guess,
  • it fills in passwords for you, so someone peeking at your keyboard while you log in learns nothing.

We have been using a password manager ourselves since the company was founded, and we make sure that all passwords are generated strong. We store both our own and our clients’ passwords in it, and periodically verify that they are strong and have not been leaked.
Of course, there are other mechanisms for ensuring secure access to accounts, such as using Social Media Logins or advanced Privileged Identity Management solutions, but that’s another topic.

VMware vFORUM, 2018

vFORUM is an iconic event organised by VMware and Dell EMC in Poland and worldwide.

On October 30, at the DoubleTree by Hilton in Warsaw, workshops and lectures were held, led by the best specialists on the market in areas such as Network & Security, Digital Transformation, and Modern and Agile Data Center. The special guests included, among others, Richard Bennett (VMware), Dariusz Piotrowski (Dell EMC) and Roman Polko (GROM).

All vFORUM participants had the opportunity to take part in technical workshops on the subject of digital business transformation, meet industry experts and exchange their experiences with other conference participants!

INDEVOPS was present at the panel ‘Intelligent mathematical algorithms in the service of monitoring applications, services, and infrastructure’, led by Paweł Orzechowski (CTO, INDEVOPS) and Przemysław Tomaszewski (Systems Engineer, VMware).

For over an hour, the speakers talked about monitoring, analysing and predicting behaviour and failures across the entire IT environment using the vRealize Operations tools: from hardware to applications, in the data center and in the cloud, connections and dependencies between objects, and insight into structured and unstructured data. And even though the demo didn’t go quite as planned, there was no end to the discussions and questions.

Round of applause for these gentlemen!

‘Digital transformation is not a fashion that’ll pass. It’s a training plan for business muscles.’

 

#Vmware #vforum #indevops #monitoring #beindevops