Docker has quickly become an essential platform for application containerization. By empowering developers to rapidly deploy apps and host them in the cloud, Docker has simplified the dev cycle and expedited the process of building scalable, modern applications.

Docker Compose
Docker Compose is a powerful tool for “code-to-cloud” development. It allows developers to specify how to retrieve, build, and run multiple containers together, all defined within a single YAML file (docker-compose.yaml). Let’s check out some cases where Compose can simplify app development.

Compose For Local Development
Containers accelerate development by eliminating the need to install and manage dependencies locally. This allows for a “plug and play” approach to the dev cycle — applications can run on any major OS (including cloud hosts), as they come prepackaged with everything they need to run independently. All developers need to install is Docker.

Docker Compose takes the convenience of containers one step further by consolidating each service’s build and runtime configuration into a single workflow. With Compose, it’s as simple as:

- Define how to build your app’s services with a Dockerfile
- Define how to run your app’s services in the docker-compose.yaml
- Build and run your app with docker-compose up

Compose also allows devs to configure volumes and bind mounts (directories where data persists outside a container's lifecycle) and port mappings (to forward local traffic into the containers).
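
As a rough sketch (the service names, paths, ports, and images here are hypothetical), a docker-compose.yaml for a small web app with a database might look like this:

version: "3.8"
services:
  web:
    build: ./web                 # build the image from ./web/Dockerfile
    ports:
      - "8080:8080"              # forward local port 8080 into the container
    volumes:
      - ./web:/app               # bind mount the source tree for live edits
    depends_on:
      - db
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume so data survives restarts
volumes:
  db-data:

Running docker-compose up builds the web image and starts both containers together.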

Compose For Automated Testing
Most modern software development follows the trunk-based model: small, frequent changes to a shared codebase, with automated tests run after each commit.

As microservices become more common, applications involve more integrations than ever before. This calls for continuous testing with every new commit, which can become time- and resource-intensive.

Unit-testing with Compose is pretty straightforward, while integration and end-to-end (E2E) testing tend to be more complex. These types of automated testing require a number of services, which often need to be modified in order to replicate a production environment.

Many of the features that make Compose stand out for local development are also useful for automated testing. Compose can quickly and efficiently spin up and configure full-stack environments for automated testing (which your DevOps engineers appreciate). This allows for executing tests in a reliable and repeatable manner.

Compose is also valuable for testing database integrations. Because containers are ephemeral by nature, we can start each test run with a fresh database and easily seed it with the same data every time. This eliminates the possibility of remnant or corrupt data causing false positives/negatives.
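
As a sketch of this pattern (the service and file names are hypothetical), the official postgres image runs any SQL scripts mounted into /docker-entrypoint-initdb.d when it initializes an empty data directory, so a test database can be seeded identically on every run:

services:
  test-db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: test
      POSTGRES_DB: app_test
    volumes:
      # seed script runs automatically when the database is first initialized
      - ./test/seed.sql:/docker-entrypoint-initdb.d/seed.sql:ro
    tmpfs:
      - /var/lib/postgresql/data   # keep data in memory so every run starts clean

Because nothing is written to a named volume, each docker-compose up gives the test suite an identical, freshly seeded database.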

In most cases, the same Compose file can be used for both local development and remote testing environments. But if there are differences in how the environments run, you can put the few changes you need in a second Compose file and override the general Compose file like so:

docker-compose -f docker-compose.yml -f docker-compose.test.yml up -d
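
For example, a hypothetical docker-compose.test.yml might override only what differs in CI, such as the build target and environment variables, while everything else is inherited from docker-compose.yml:

services:
  web:
    build:
      context: ./web
      target: test               # assumes a multi-stage Dockerfile with a "test" stage
    environment:
      APP_ENV: test
      DATABASE_URL: "postgres://postgres:test@test-db:5432/app_test"
    command: ["npm", "test"]     # run the test suite instead of the dev server

When both files are passed with -f, values from the later file override or extend those from the earlier one.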

Compose For Cloud Deployments
Compose is increasingly being adopted by cloud platforms as a native format for defining applications. In particular, Shipyard supports Compose as a first-class citizen. With just a single Compose file, users get ephemeral environments for all their pull requests, and one-click deploys to longer-lived QA, staging, and production environments!

Conclusion
Docker Compose is an essential tool for container development and deployment. Check out the Compose file reference and guide for a full tour of features. And check out our starter repos at github.com/shipyard for examples of containerizing modern frameworks.

Thanks for reading, and good luck with your deployments!
Holly 28 september 2021, 21:24

Managing Kubernetes costs can be a daunting task, especially now with the prevalence of multi- and hybrid-cloud computing environments. Implementing Kubernetes environments correctly can help make this process smoother and easier, especially when it comes to managing your ephemeral environments.

Managing Clusters
One of the best ways to limit a Kubernetes cluster’s costs is understanding how to manage physical clusters and ephemeral environments.

Although it’s considered a best practice to organize a cluster using namespaces, doing so incorrectly can lead to extra spending. Namespaces alone will not increase cost, but poor namespace usage makes it harder to keep track of where costs are coming from.

Shipyard believes it is best practice to use one cluster exclusively for production and keep any other environments in a separate cluster. This gives your team room to experiment and make mistakes, while ensuring your production environments will not be affected by lower-priority ones. While this sounds like it’d be less cost-effective, isolating two clusters with very different tolerances/profiles will allow for more targeted cost management in each.

What is the best way to manage your Kubernetes clusters? Well, that depends on who you ask. In our opinion, a good starting point is looking at the cluster’s size. In general, the number of microservices running in a cluster is a relatively straightforward way to get an idea of your deployment’s size.

- Small clusters. Deployments with 2-10 microservices only need a single production cluster. These small clusters can handle both production and ephemeral workloads, which allows more cost-effective bin-packing of microservices onto the cluster's nodes. It also saves you the monthly fee for running an extra managed Kubernetes cluster.

- Medium clusters. When a deployment has more than 10 microservices, it becomes more difficult and time-consuming for developers to manage multiple running copies of an application. In these cases, the most cost-effective option is to have developers manually shut down or destroy any ephemeral environments when not in use.

- Large clusters. When it comes to enterprise-level clusters, removing ephemeral environments may not be a good option: it's time-consuming to restore each environment ad hoc when needed, particularly since more users in the company will mean more usage. Companies can keep costs down by implementing auto-scaling and scaling ephemeral environments to zero when not in use (see the sketch below).
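
One way to implement scaling to zero is a scheduled job that scales an ephemeral environment's deployments down outside working hours. The manifest below is only a sketch: the namespace, schedule, and env-scaler service account are hypothetical, and that service account needs RBAC permission to scale deployments.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-down-previews            # hypothetical job name
  namespace: preview                   # hypothetical namespace holding ephemeral environments
spec:
  schedule: "0 20 * * 1-5"             # weekday evenings at 20:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: env-scaler   # assumed to be allowed to scale deployments
          restartPolicy: OnFailure
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest
              command: ["kubectl", "scale", "deployment", "--all", "--replicas=0", "-n", "preview"]

A matching job, or your environment tooling itself, can scale the deployments back up in the morning or on demand.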

Balancing Cluster Costs
Another important aspect to consider is the cost ratio between the production cluster and the other, ephemeral environments. Larger companies tend to run significantly more environments, so this ratio often correlates with a company’s size.

For example, a smaller company might use one cluster for production and another for development. However, a larger organization may require a higher level of isolation between departments, primarily for security reasons. In this case, isolating each department's deployments in its own namespace can further improve security, because many Kubernetes access and policy rules can be scoped to a namespace.
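
As a small, hypothetical example of namespace-scoped rules, an RBAC Role and RoleBinding can give one department's group access to workloads only within its own namespace (all names below are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-developer          # hypothetical role name
  namespace: team-a               # hypothetical department namespace
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-developer-binding
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-devs             # hypothetical group from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-developer
  apiGroup: rbac.authorization.k8s.io

Users bound this way can manage workloads in the team-a namespace but have no access to other namespaces.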

The type of organization also influences the ratio of production to other environments. For example, a single-site SaaS startup will often have fewer ephemeral environments than a large, multi-department enterprise software company.

Considering these scenarios, how can we optimize the cost ratio between the production environment and other environments? Here are some ideas:

- Perform a scheduled prune of your Kubernetes clusters and environments. Scheduled cluster maintenance helps identify environments that are no longer in use; those environments should then be destroyed.

- Create ephemeral environments with limited resources. When destroying low-use environments isn't an option, we can combine the strategies discussed above (auto-scaling, scaling to zero) with limited resource allocations at the namespace level via ResourceQuota, as sketched below.
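
A ResourceQuota caps how much one ephemeral environment's namespace can consume in total. The manifest below is only a sketch; the name, namespace, and numbers are purely illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: preview-quota             # hypothetical quota name
  namespace: preview-pr-123       # hypothetical per-pull-request namespace
spec:
  hard:
    requests.cpu: "2"             # total CPU requested across the namespace
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    pods: "15"                    # cap on the number of pods

Note that once a CPU/memory quota is active, pods in that namespace must declare resource requests and limits (or inherit defaults from a LimitRange), otherwise they will be rejected.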
Holly 28 september 2021, 21:12

Would it surprise you if I said that BASIC is still relevant? In addition to being the progenitor of modern home computing, the language is still viable even outside retro-enthusiast circles. Let's take a brief tour of its history, from its origins to its modern implementations.
xially 12 march 2021, 16:50

Need a Java software development company? SCAND has a mature team of full-stack Java developers working on customer projects across the globe. Utilizing the latest Java technology stack, we create cross-platform applications that work seamlessly on desktop, web, and mobile devices.
ViDey 21 october 2020, 13:43

The PVS-Studio static analyzer team, which until recently searched for bugs and potential vulnerabilities only in C, C++, and C# code, has prepared a version of their tool for Java as well. Although the Java world already has a number of static analysis tools, the developers believe their analyzer can be powerful and will be good competition.
Kate Milovidova 21 june 2018, 14:33

I really liked the discussion thread on Quora.com: What is the hardest part about learning to program? I didn't read all 87 responses, but I singled out the ones I liked into a separate article of 10 items. It's a loose retelling of the opinions of many different people. If readers are interested, I will continue.
1. The gap between high standards and low skills
The article "No one talks about it to newcomers" describes a problem common among people doing creative or intellectual work. Programming is a complex subject, and it usually attracts capable, ambitious people prone to perfectionism. At the start, their work won't be very good. Accustomed to a high bar, they get discouraged, and an inner voice constantly whispers: "You'll never manage this; better to give it up." At such moments, remember that this self-criticism is a sign of your high standards, and believe that you will get past this "incompetent period".

As for the extraordinary advantages of programming, here they are:
xially 6 november 2017, 13:32

The Unreal Engine project continues to evolve: new code is added, and previously written code is changed. An inevitable consequence of this development is the appearance of new bugs, which a programmer wants to identify as early as possible. One way to reduce the number of errors is to use a static analyzer such as PVS-Studio. If you care about code quality, this article is for you.

Although we already did this (https://www.unrealengine.com/blog/how-pvs-studio-team-improved-unreal-engines-code) two years ago, plenty of code has been written and changed since then. It is always useful and interesting to look at a project's code base after a two-year break, for several reasons.

First, we were interested in the false positives from the analyzer. This work also helped us improve our tool, reducing the number of unnecessary messages. Fighting false positives is a constant task for any developer of code analyzers.

The codebase of Unreal Engine has changed significantly over the two years. Some fragments were added, some were removed, and sometimes entire folders disappeared. As a result, not all parts of the code received sufficient attention, which means there is some work for PVS-Studio.

The fact that the company uses static analysis tools shows the maturity of the project development cycle, and the care given to ensuring the reliability and safety of the code.

We won't cover all the errors we found and fixed; we will highlight only those that, in our view, deserve attention.

Read more - https://www.unrealengine.com/en-US/blog/static-analysis-as-part-of-the-process

P.S. Those who are interested may take a look at other errors in the pull request on GitHub. To access the source code and the pull request, you must have access to the Unreal Engine repository on GitHub. For this, you need accounts on GitHub and EpicGames, linked on the website unrealengine.com. After that, accept the invitation to join the Epic Games community on GitHub. Instructions: https://www.unrealengine.com/ue4-on-github.
Kate Milovidova 27 june 2017, 12:50

IT conferences and meetups on programming languages feature a growing number of talks about static code analysis. Although this field is quite specific, there are still plenty of interesting discussions to help programmers understand the methods, uses, and specifics of static code analysis. In this article, we have collected a number of videos on static analysis whose accessible style of presentation makes them useful and interesting to a wide audience of both skilled and novice programmers.

What is Static Analysis?
Kate Milovidova 26 april 2017, 8:24

In this article we'll look at the main features of SonarQube, a platform for continuous analysis and measurement of code quality, and we'll also discuss the advantages of evaluating code quality using SonarQube metrics.

SonarQube is an open source platform, designed for continuous analysis and measurement of code quality. SonarQube provides the following capabilities:
Kate Milovidova 16 november 2016, 12:13

One of the main problems with C++ is the huge number of constructs whose behavior is undefined, or is simply unexpected for a programmer. We often come across them when running our static analyzer on various projects. But, as we all know, it's best to detect errors at the compilation stage. Let's see which techniques in modern C++ help write code that is not only simple and clear, but also safer and more reliable.
What is Modern C++?

The term Modern C++ became very popular after the release of C++11. What does it mean? First of all, Modern C++ is a set of patterns and idioms designed to eliminate the downsides of the good old "C with classes" style that so many C++ programmers are used to, especially if they started out programming in C. Code written in C++11 looks far more concise and understandable, which is very important.
Kate Milovidova 15 september 2016, 11:44