IT News

Optimise processes with technology in the media industry: how much can you benefit?

Technology can transform the way media organisations work, turning complex workflows into seamless, efficient processes. The real question, however, is how much you could gain when every step, decision, and output is optimised. Let's find out.

Why is process optimisation important for media organisations aiming for an enhanced customer journey?

Media companies today are under intense pressure. Shrinking ad markets, slowing subscription growth, and growing audience fragmentation across marketing channels mean that inefficiencies are no longer a luxury they can afford.

A significant share of a media business’s cost base is tied up in content production. A report by KPMG notes that annual content spend by top players now exceeds $200 billion, growing at a 10% compound annual growth rate since 2020. Yet reports show that about 80% of media content drives roughly 20% of target audience engagement, highlighting the urgent need for streamlined operations.

Against this backdrop, outdated, fragmented workflows – especially for production, distribution, and rights management – slow output, increase overheads, and leave value on the table. Rising expectations for real-time publishing, personalised content, and cross-platform delivery mean that processes built a decade ago can no longer keep pace – they weigh on media performance, slowing it down rather than accelerating it.

This is why optimising processes matters so much now: the gap between streamlined, digitally native operators and traditional players is widening fast. Organisations that fail to modernise risk being squeezed from both sides – by rising media spend and shrinking revenue – while those that reorganise workflows and embrace data and automation sharpen their competitive edge.

Data – whether customer data, sales data or audience data – has become a core competitive advantage, yet many media organisations still struggle to operationalise it efficiently.


What main business challenges can be solved if organisations optimise processes in the media industry?

Media organisations face a complex mix of business challenges that make effective process optimisation more critical than ever.

Traditional revenue models are under pressure, with declining print sales, slowing subscription growth, and fluctuating advertising markets forcing companies to rethink how they generate income.

At the same time, shifting audience behaviours – from on-demand streaming to multi-platform consumption – are driving the need for faster, more flexible content delivery and more effective marketing efforts. Rising costs, rapid technology disruption, platform dependency, and increasing regulatory scrutiny further complicate operations, while intellectual property management and audience trust remain ongoing concerns.

Properly executed optimisation can help media companies address these pressures, creating space to focus on innovation, efficiency, and long-term competitiveness.

Let’s now examine in more detail the three main challenges that can be solved by properly conducted optimisation.

Costs of maintaining current systems

Many media organisations rely on legacy infrastructures – old CMS platforms, ad‑servers, DAM tools, licensing modules – often stitched together with manual connectors. But keeping them alive has become a heavy drain.

A recent study by Profound Logic shows that outdated systems can consume 60–80% of a company’s IT budget, leaving only 20–40% for new development or innovation.

More broadly, organisations report that technical debt often “eats” 20–40% of the entire technology estate’s value, forcing CIOs to redirect 10–20% of planned product-oriented budgets merely to keep systems running. Meanwhile, licensing, support, and maintenance costs escalate yearly – yet returns from these ageing systems continue to shrink, turning legacy infrastructure into a bottleneck rather than a foundation.

Optimisation reduces these hidden costs by consolidating platforms, automating workflows, or replacing outdated modules with modern, scalable alternatives.

Data source integration (acceleration of operations)

Modern media companies ingest data from dozens of disparate sources and channels: analytics tools, ad-tech systems, CRM, content metadata pipelines, social media platforms, syndication partners – and more. Without integration, this creates silos. In many organisations, legacy systems or manual workflows for reconciling data cause major inefficiencies.

According to industry reports, outdated systems and data silos can drastically slow operations, hamper timely decision‑making, and make real‑time monetisation or personalisation nearly impossible.

Optimisation – meaning collecting, interpreting, and using data in a meaningful way via unified data pipelines, automated reporting, and modern integration – becomes the only viable way to turn data into actionable, timely insight rather than a backlog of unread spreadsheets.
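To make the idea of a unified pipeline more concrete, here is a minimal sketch in Python of the aggregation step: engagement figures from several source systems are merged into one report keyed by content ID. The source names, fields and numbers are invented for illustration and stand in for whatever exports your analytics, ad and CRM systems actually produce.

```python
from collections import defaultdict

# Illustrative exports from three hypothetical source systems.
web_analytics = [{"content_id": "a1", "views": 12000}, {"content_id": "b2", "views": 800}]
ad_server     = [{"content_id": "a1", "ad_revenue": 340.0}, {"content_id": "b2", "ad_revenue": 12.5}]
crm_events    = [{"content_id": "a1", "new_subscribers": 45}]

def unify(*sources):
    """Merge per-content metrics from any number of sources into one record per content ID."""
    merged = defaultdict(dict)
    for source in sources:
        for row in source:
            merged[row["content_id"]].update(
                {k: v for k, v in row.items() if k != "content_id"}
            )
    return dict(merged)

if __name__ == "__main__":
    report = unify(web_analytics, ad_server, crm_events)
    for content_id, metrics in report.items():
        print(content_id, metrics)
```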

Team demotivation and time spent on repetitive tasks

Creative teams in media are routinely bogged down by administrative work – manual tagging, format conversions, rights checks, repetitive publishing steps, and endless reconciliations – tasks that drain time away from reporting, storytelling, and experimentation. The consequences are clear: innovation slows, fewer new formats are tested, and turnover rises. Newsroom and production job losses across 2023–24 have heightened anxiety around workload, automation, and organisational change.

Process optimisation – from AI tagging and auto-formatting to template-driven production for social campaigns and automated publishing pipelines – removes friction and gives teams back the time they need to focus on high-value creative work. This is also where tools like Adacs add tangible value: by automating the planning and buying process traditionally tracked manually in Trello, Excel or email chains, media agencies can eliminate repetitive coordination tasks, reduce errors, and accelerate campaign activation.

WAN-IFRA and industry case studies show that workflow automation delivers faster publishing cycles and measurable productivity gains, helping teams stay motivated and focused on work that actually moves the needle.

What actions can you take, and are they really worth it?

Media organisations facing mounting operational drag have three choices, and only one of them leads anywhere good.

  • Internal changes

They can try to handle the transformation internally – but this often means asking already overstretched teams to rebuild systems they barely have time to maintain. Progress is slow, political resistance is high, and legacy complexity tends to win.

  • External consulting

They can partner with external experts – and this is where Future Processing brings a real advantage with its deep market experience, hands-on knowledge of how media companies actually operate, and a network of recognised technology partners that understand modern content, data, and rights workflows.

This option accelerates change, reduces risk, and brings in capabilities the teams don’t need to reinvent from scratch.

  • No changes (and what this will entail)

Or they can do nothing and accept what follows: rising costs, slower output, talent attrition, deteriorating audience engagement, and competitors who move faster because they chose to modernise.

The choice is straightforward. The cost of inaction grows every quarter, while the benefits of guided optimisation compound. Future Processing’s role is to help media companies move past intention and into measurable results – efficiently, quickly, and with industry-proven methods.

What stage of the optimisation process are you at?

Below we outline a simple framework to help you self-diagnose your current maturity:

Stage 1 – Awareness

You recognise inefficiencies, but haven’t quantified their impact. Work feels slow, fragmented or overly manual.

Stage 2 – Initial measurement

You’ve started tracking KPIs, though only for parts of the workflow. You see where delays occur, but not the full root causes.

Stage 3 – Process mapping & prioritisation

Workflows are documented, bottlenecks are visible, and you’ve identified quick wins or begun limited proofs of concept.

Stage 4 – Automation & modernisation

Legacy systems are being replaced or consolidated, data pipelines are integrated, and key tasks are automated.

Stage 5 – Scaling & continuous improvement

Optimised processes are embedded in the organisation. Performance is monitored systematically, and improvements happen continuously.

Use this five-stage framework as a quick self-assessment. You don't need to analyse every stage in depth – what matters is recognising where you currently stand and what the next practical step should be. If you're early in the journey, start by quantifying bottlenecks and documenting workflows. If you're further along, focus on consolidating systems, integrating data pipelines, and scaling automation.

The goal isn't perfection; it's momentum. Even small, well-targeted actions at your current stage can unlock speed, reduce costs and free teams to focus on higher-value work.

What measurable benefits can you achieve through process optimisation?

For media organisations, the impact of process optimisation is not theoretical – it shows up quickly in numbers that matter. Depending on your maturity and the depth of optimisation, you can typically expect gains in five areas:

  • Operational efficiency

Faster production cycles, fewer handovers, reduced rework, and smoother cross-platform publishing. Teams ship more content with less friction.

  • Cost savings

Lower spend on legacy tooling, redundant systems, manual processes, and avoidable licensing or maintenance fees. Streamlined workflows reduce overheads without reducing output.

  • Revenue growth

Better use of data, faster experimentation, and clearer visibility of asset performance translate into improved monetisation – from higher ad yield to more effective personalisation and smarter content investments.

  • Workforce satisfaction

By removing repetitive tasks and administrative drag, teams reclaim time for creative, strategic, and audience-facing work. This boosts morale, retention, and pace of innovation.

  • Better executive decision-making

Unified data pipelines and real-time insights enable leaders to allocate budgets, adjust strategy, and respond to market shifts with confidence – not guesswork.

Process optimisation isn't a technical exercise; it's a direct lever for performance, growth, and competitive advantage in an industry where speed and clarity increasingly decide who wins.


FAQ

How important is data quality, analytics, and data integration for driving optimisation in media operations?

Critical. High-quality data and data consistency enable media organisations to pinpoint bottlenecks, understand real production and distribution costs, forecast content demand, personalise delivery across platforms, and make evidence-based decisions about which workflows to modernise first.

Implementing change data capture ensures that updates across systems are tracked in real time, while a centralised data warehouse allows teams to integrate, store, and analyse this information efficiently. Without these capabilities, optimisation becomes guesswork.
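To illustrate what change data capture looks like in principle, here is a deliberately simplified Python sketch: records whose version is newer than the warehouse copy are detected and applied centrally. Real CDC tools typically read database transaction logs rather than comparing snapshots, and all names and values below are assumptions.

```python
# A deliberately simplified illustration of change data capture (CDC):
# detect records that changed since the last sync and apply them to a central store.
# Real CDC implementations usually read database transaction logs instead.

source_system = {
    "article-17": {"title": "Budget 2026", "status": "published", "version": 3},
    "article-18": {"title": "Transfer news", "status": "draft", "version": 1},
}
warehouse = {
    "article-17": {"title": "Budget 2026", "status": "draft", "version": 2},
}

def capture_changes(source, target):
    """Yield (key, row) pairs for records that are new or newer than the warehouse copy."""
    for key, row in source.items():
        known = target.get(key)
        if known is None or row["version"] > known["version"]:
            yield key, row

def sync(source, target):
    changes = list(capture_changes(source, target))
    for key, row in changes:
        target[key] = dict(row)      # apply the change to the central store
    return changes

if __name__ == "__main__":
    applied = sync(source_system, warehouse)
    print(f"applied {len(applied)} change(s)")
```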

What are the most common pitfalls when optimising media processes?

Common pitfalls include underestimating change management, attempting to optimise everything at once, introducing new technology without first redesigning the underlying processes, working without clear KPIs, and measuring technical activity instead of business impact. Technology alone won’t fix broken workflows.

What does successful process optimisation require?

Success typically requires clear process governance, cross-functional collaboration between editorial, production, distribution, commercial and data teams, upskilling staff in tools and automation, aligning incentives around efficiency and speed, and embedding a culture of continuous improvement rather than one-off fixes.

What tangible gains can media organisations expect?

Tangible gains include reduced production and operational costs, faster time-to-market for content, smoother cross-platform publishing, better utilisation of staff and systems and more informed decisions. Optimised workflows also improve monetisation of media investments, enabling more agile ad placements and campaigns, while leveraging diverse data sources to respond to market shifts or audience behaviour in real time. The result: smarter marketing strategies and stronger overall business performance.

What are the main obstacles to overcome?

Key hurdles include resistance to change, outdated or fragmented legacy systems, poor data integration, inadequate data security, unclear ownership of processes, and difficulty aligning commercial priorities with operational decisions. Overcoming these requires leadership commitment and a structured, phased approach.

Cloud

Beyond compliance: leveraging the EU Data Act to unlock multicloud agility


From compliance mandate to strategic opportunity

For media and entertainment enterprises operating across regions, the EU Data Act represents a fundamental shift in how cloud infrastructure must be governed.

The Act – applicable from 12 September 2025 – enforces cloud interoperability and portability, ensuring that customers can transfer workloads and data between cloud providers with minimal friction. From 12 January 2027, switching fees will be prohibited; until then, any temporary fees must reflect only direct costs such as egress or data export.

This situation transforms cloud switching and interoperability from a technical or cost-optimisation task into a regulatory compliance requirement.

For global media organisations, the change also presents a strategic opportunity: the freedom to optimise for performance, pricing, and innovation, without being locked into a single provider.

Cloud switching enablement

With cloud-agnostic architectures and portable data schemas in place, organisations are well-positioned to comply with the EU Data Act while also gaining agility.

To effectively prepare for such a change, technical, operational, and financial teams should work together to build a portability-ready foundation that supports both compliance and agility.

Here is a short checklist of things that should be done to kick-start the preparations:

  • Map dependencies and export formats for all media assets, manifests, encryption keys, and telemetry. Understanding exactly what data lives where – and in what format – is the cornerstone of smooth portability.

  • Negotiate exit SLAs and portability clauses in your Master Service Agreements (MSAs) before renewals. These ensure that providers remain accountable for seamless transitions when business or regulatory needs change.

  • Maintain interoperability evidence for audits and compliance reporting. Keeping documentation up to date demonstrates readiness and reduces the risk of last-minute compliance gaps.

  • Plan dual-run and staged cutovers for live or high-traffic events. Running workloads in parallel during migrations helps preserve uptime and viewer experience.

  • Align Cloud Financial Management (CFM) practices to track portability-related costs such as egress, API calls, and data transformations. Visibility into these drivers supports informed decisions on when and how to move workloads efficiently.

By taking these steps one by one, your organisation can create a clear pathway to compliance with the EU Data Act while also strengthening its ability to adapt quickly, whether to optimise cost, shift regions, or embrace new innovation opportunities.
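As a rough illustration of the first step, dependency and export-format mapping, the sketch below keeps a small portability inventory in Python and flags assets whose export path is still undocumented. The asset names, providers and formats are hypothetical placeholders, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class AssetRecord:
    """One entry in a portability inventory. Fields and values are illustrative assumptions."""
    name: str
    provider: str          # where the data currently lives
    export_format: str     # documented export format, or "" if unknown

inventory = [
    AssetRecord("mezzanine video files", "cloud-a-object-storage", "MXF / ProRes"),
    AssetRecord("playback manifests",    "cloud-a-object-storage", "HLS/DASH manifests"),
    AssetRecord("DRM key metadata",      "cloud-b-kms",            ""),   # export path not yet defined
    AssetRecord("player telemetry",      "cloud-a-warehouse",      "Parquet"),
]

def portability_gaps(records):
    """Return assets whose export format is undocumented – the first blockers to a clean exit."""
    return [r for r in records if not r.export_format]

if __name__ == "__main__":
    for gap in portability_gaps(inventory):
        print(f"missing export format: {gap.name} ({gap.provider})")
```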


Multi-cloud and multi-CDN synergies

Once storage and lifecycle management are optimised for portability, attention shifts to the delivery layer, where content performance directly impacts viewer experience, ad-fill efficiency, and monetisation.

A multi-CDN strategy, when coupled with intelligent traffic steering, delivers both performance and cost resilience. It aligns seamlessly with the EU Data Act’s interoperability principles, enabling frictionless content delivery across multiple providers and geographies.

Key enablers in this process include:

  • RUM-based routing for performance-driven traffic steering — using real user measurements to select the best-performing CDN in real time.

  • Real-time failover and standardised logging for performance comparison, service continuity, and compliance traceability.

  • Cost-versus-QoE (Quality of Experience) metrics to correlate delivery costs with actual audience satisfaction, ad-fill integrity, and playback quality.
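To make the RUM-based routing and cost-versus-QoE ideas above more tangible, here is a small, hypothetical Python sketch: for each region it picks the cheapest CDN whose recent real-user measurements meet QoE thresholds, and falls back to the best-performing one otherwise. CDN names, thresholds and prices are assumptions for illustration only.

```python
# Hypothetical RUM-based traffic steering: pick, per region, the CDN whose recent
# real-user measurements are acceptable, preferring the cheaper one when several qualify.

rum_samples = {
    # region -> cdn -> (median startup time in seconds, rebuffer ratio)
    "eu-west": {"cdn_a": (1.1, 0.004), "cdn_b": (1.4, 0.003)},
    "us-east": {"cdn_a": (2.3, 0.020), "cdn_b": (1.2, 0.006)},
}
egress_price_per_gb = {"cdn_a": 0.028, "cdn_b": 0.035}

MAX_STARTUP_S = 2.0
MAX_REBUFFER = 0.01

def choose_cdn(region):
    """Return the cheapest CDN meeting QoE thresholds, or the best-performing one otherwise."""
    candidates = []
    for cdn, (startup, rebuffer) in rum_samples[region].items():
        meets_qoe = startup <= MAX_STARTUP_S and rebuffer <= MAX_REBUFFER
        # QoE-compliant CDNs sort first (False < True), then by price; non-compliant by startup time.
        candidates.append((not meets_qoe, egress_price_per_gb[cdn] if meets_qoe else startup, cdn))
    return sorted(candidates)[0][2]

if __name__ == "__main__":
    for region in rum_samples:
        print(region, "->", choose_cdn(region))
```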

The importance of the EU Data Act and multicloud convergence

The convergence of the EU Data Act compliance with multi-cloud and multi-CDN strategies gives organisations a unique opportunity to future-proof both their infrastructure and their business models.

By embracing portability and intelligent workload distribution, media enterprises can:

Avoid vendor lock-in and regulatory penalties.

Ensuring interoperability across clouds eliminates the risk of being tied to a single provider’s roadmap or pricing, while meeting the EU’s strict portability standards before they become mandatory.

Enable rapid platform migrations and dual-region streaming resilience.

Workloads, assets, and delivery pipelines can be replicated or shifted between providers and regions with minimal downtime — an essential capability for global events, disaster recovery, and audience growth.

Gain negotiating leverage across providers.

The ability to move workloads creates a more balanced commercial relationship. Providers must compete on service quality, innovation, and pricing rather than relying on customer inertia.

Drive predictable cost efficiency without compromising experience quality.

Multicloud visibility and multi-CDN optimisation allow costs to be tied directly to performance outcomes — aligning spend with measurable audience impact, ad-fill integrity, and playback quality.

The EU Data Act provides the regulatory push, but the true value lies in strategic flexibility — building an ecosystem that can evolve as technology, audience behaviour, and business priorities change. For digital media leaders, this is not just about meeting new rules — it’s about shaping a more agile, resilient, and performance-driven future.


FAQ

What is the EU Data Act and why does it matter for media companies?

The EU Data Act mandates data portability and interoperability between cloud providers to promote competition and prevent vendor lock-in. For media firms that rely on multiple storage, encoding, and delivery platforms, it ensures they can migrate or duplicate workloads freely while maintaining service continuity.

  • 12 September 2025: interoperability and portability requirements take effect.

  • 12 January 2027: switching fees are fully banned; interim fees must reflect only direct costs.

  • Audit all data dependencies and exit paths.

  • Review cloud contracts and renewal terms.

  • Invest in monitoring and cost visibility tools.

  • Pilot workload mobility or dual-run tests between providers.

Do we need a multi-cloud setup to comply?

Not necessarily. Operating in multiple cloud environments can add complexity that isn’t always justified. What truly matters is a cloud-agnostic architecture – designing systems so that workloads can move or be replicated across providers when needed.

Even if your cloud adoption strategy focuses on a single provider, maintaining credible exit options strengthens your negotiating position and lowers long-term total cost of ownership.

The goal isn’t constant switching; it’s strategic flexibility – ensuring your cloud architecture supports agility, compliance, and the ability to optimise costs across regions or providers when business priorities evolve.

FinOps

Cloud cost excellence in Media: a CEO/CFO playbook


Executive summary: why is FinOps a board-level priority in the Media industry?

As audience expectations rise and every millisecond of startup time influences engagement, cloud performance and cost efficiency have become inseparable.

With media companies shifting workloads from traditional data centres to public cloud and hybrid cloud environments, understanding cloud economics is now mission-critical.

Our recent cloud financial management assessment for a global, enterprise-size media client – with an average annual cloud spend of about USD 10 million – demonstrated how disciplined financial accountability can transform cloud operations into a genuine competitive advantage.

By analysing 12–24 months of billing and usage data, mapping consumption to business functions, and assessing tagging maturity across cloud environments, we identified both immediate and long-term optimisation opportunities.

Key takeaways of the assessment

  • Thanks to our service, the client saved up to 20% of their yearly cloud spend.
  • In the first year alone, they could achieve savings of 10–20%.
  • These funds can either be kept as savings or reallocated, for example, to investments in data and AI.

In general, possible optimisations arising from such an audit could include:

  • Removal of wasteful resources, saving 23.4% of overall AWS spend.
  • Rightsizing of resources, reducing compute costs on AWS by 33%.
  • Upgrading resources, achieving 12.4% cost savings and improved efficiency.
  • Scheduling of non-production workloads, cutting 63% of computing costs in non-production environments.
  • Storage clean-up and usage optimisation, saving 64.5% in development/test environments and 41.3% in production.
  • Commitment discounts, reducing overall AWS costs by 54%.


From cost cutting to cost-to-value alignment

Cloud financial management in media is not about slashing budgets – it’s about aligning spend with business value. Every dollar should directly support quality of experience (QoE), service reliability, and roadmap velocity.

Advertising yield, subscriber growth, and live-stream concurrency all depend on QoE metrics such as startup time and rebuffer ratio. Through data-driven FinOps, our client achieved tangible savings without compromising performance, turning cloud efficiency into a strategic lever for growth.

Metrics analysed during a FinOps assessment

Broader industry changes further highlight the need for effective cloud financial management. Vendor service retirements are forcing organisations to re-evaluate platform dependencies, while multi-region expansion strategies are increasing both cost complexity and governance demands.

Key takeaway:

FinOps ≠ “cost cutting.”

It’s cost-to-value alignment with guardrails that protect QoE, business agility, and roadmap velocity.


What is a 'media workload' (and where does the money go)?

Every media experience – from streaming a live concert to watching a news clip – relies on a multi-stage pipeline that processes, protects, and delivers content seamlessly to audiences worldwide.

Typical workflow stages include:

  • Ingest – live feeds, raw files, or user-generated content enter the system. Requires high bandwidth and elastic compute to handle spikes such as sports events or premieres.
  • Transcoding & packaging – content is encoded into multiple formats and bitrates for optimal playback across devices. This compute-intensive stage is a major contributor to public cloud costs and benefits from autoscaling, intelligent job scheduling, and spot/preemptible instances.
  • Storage & replication – assets are organised by access frequency (hot, warm, cold) and lifecycle policies. Without management, hot storage costs can balloon and unused replicas increase backup and egress fees.
  • Digital Rights Management (DRM) – protects content and ensures licensing compliance. While its direct cost is smaller, DRM depends on encryption, key rotation, and high availability.
  • Telemetry & Data/AI Pipelines – playback generates analytics that feed recommendation engines, ad targeting, and audience insights. These workloads scale with audience size and require rigorous financial accountability to control costs across the media cloud.

Mapping the cost drivers

Each of these stages carries distinct cost pressures and optimisation levers:

  • Compute: Encoding, rendering, and packaging consume CPU/GPU and benefit from autoscaling and spot/preemptible instances.
  • Storage: Tiering (hot/warm/cold/deep archive) and lifecycle policies control retention costs without limiting access.
  • CDN & Egress: Optimising cache efficiency, multi-CDN contracts, and real-user-metric routing unlock major savings.
  • Observability: Logs, metrics, and traces provide insight but can quietly add cost; retention policies and compression are essential.
  • Data & AI: Analytics and recommendation workloads require clear cost attribution for ROI and governance.

Over-provisioned transcode nodes and under-tagged CDN traffic mask true unit economics. Correcting these inefficiencies could deliver a 10–20% reduction in cloud spend in the first year while establishing governance for sustainable cost control.
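A simple way to surface those unit economics is to attribute spend by tag and divide it by a business denominator such as streamed hours. The Python sketch below shows the idea on invented billing lines; in practice the input would be your provider's billing export, and any untagged spend is flagged for follow-up.

```python
# Unit-economics sketch: attribute spend by workload tag and compute a cost per
# streamed hour. Line items, tags and hours are invented for illustration.

billing_lines = [
    {"service": "compute", "cost": 5200.0, "tags": {"workload": "transcode"}},
    {"service": "cdn",     "cost": 8100.0, "tags": {"workload": "delivery"}},
    {"service": "storage", "cost": 1900.0, "tags": {}},                     # untagged spend
]
streamed_hours = {"transcode": 410_000, "delivery": 410_000}

def cost_by_workload(lines):
    totals, untagged = {}, 0.0
    for line in lines:
        workload = line["tags"].get("workload")
        if workload is None:
            untagged += line["cost"]
        else:
            totals[workload] = totals.get(workload, 0.0) + line["cost"]
    return totals, untagged

if __name__ == "__main__":
    totals, untagged = cost_by_workload(billing_lines)
    for workload, cost in totals.items():
        print(f"{workload}: ${cost / streamed_hours[workload] * 1000:.2f} per 1,000 streamed hours")
    print(f"untagged spend to investigate: ${untagged:.2f}")
```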

With these cost drivers mapped and controlled, the next step was to ensure that optimisations were not locked into a single vendor or region. This led to a broader focus on cloud-agnostic design.

Spot-aware job schedulers for background tasks

Media encoding is naturally parallel and fault-tolerant, making it an ideal candidate for spot or preemptible instances, which can offer savings of up to 90%.

Implemented operational strategies included:

  • Checkpointed, segmented transcodes with automatic fallback to On-Demand.
  • Mixed instance strategies for cost and risk management.
  • Preemption budgets and SLA-aware max-time-to-complete policies.

These strategies maintain service reliability while drastically reducing transient compute costs.
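As a rough sketch of the checkpointed, fallback-based approach described above, the Python snippet below tries each segment on spot capacity a limited number of times and then falls back to on-demand, so the job always completes. Preemption is simulated and all names and limits are illustrative, not a real scheduler implementation.

```python
import random

# Simplified spot-aware transcode scheduler: each video segment is tried on spot capacity
# first and falls back to on-demand after repeated preemptions, so the job always completes.

MAX_SPOT_RETRIES = 2

def run_segment_on_spot(segment):
    """Pretend to transcode a segment on a spot instance; random 'preemptions' simulate interruptions."""
    return random.random() > 0.3   # True = finished before preemption

def run_segment_on_demand(segment):
    return True                    # on-demand capacity is assumed to always finish

def transcode(segments):
    placement = {}
    for segment in segments:
        for _ in range(MAX_SPOT_RETRIES):
            if run_segment_on_spot(segment):
                placement[segment] = "spot"
                break
        else:
            # Checkpointing means only the interrupted segment is redone, not the whole job.
            run_segment_on_demand(segment)
            placement[segment] = "on-demand (fallback)"
    return placement

if __name__ == "__main__":
    for segment, where in transcode([f"segment-{i:03d}" for i in range(8)]).items():
        print(segment, "->", where)
```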

Once compute workloads were stabilised and optimised, attention shifted to long-term storage and lifecycle management — another area where automation and tiering unlock significant efficiencies.

Archive tiering & lifecycle policies

After addressing compute and pipeline efficiencies, the next layer of optimisation lies in how content is stored and retained over time. Automated tiering ensures that storage costs align with content value and rights, without compromising accessibility or compliance.

  • Hot, Warm, Cold, and Deep Archive tiers organised by popularity and age.
  • Compliance safeguards including legal hold, immutability, and checksum/fixity validation.
  • Predictive pre-fetching to prepare for seasonal or event-driven spikes in demand.

This approach lowers overall storage TCO while preserving editorial agility and ensuring full compliance with media-specific governance and retention policies.
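A lifecycle policy of this kind can be expressed as a small set of rules. The sketch below, in Python, picks a target tier from an asset's age, recent access count and legal-hold status; the thresholds and tier names are assumptions rather than any provider's defaults.

```python
from dataclasses import dataclass

# Sketch of a lifecycle policy: derive a storage tier from an asset's age, recent access
# count and legal-hold flag. Thresholds and tier names are illustrative assumptions.

@dataclass
class Asset:
    name: str
    age_days: int
    accesses_last_30d: int
    legal_hold: bool = False

def target_tier(asset: Asset) -> str:
    if asset.legal_hold:
        return "warm"                    # keep held assets quickly restorable and immutable
    if asset.accesses_last_30d > 50 or asset.age_days < 30:
        return "hot"
    if asset.age_days < 180:
        return "warm"
    if asset.age_days < 720:
        return "cold"
    return "deep-archive"

if __name__ == "__main__":
    catalogue = [
        Asset("evening-news-2026-02-01", age_days=14, accesses_last_30d=300),
        Asset("documentary-master-2021", age_days=1500, accesses_last_30d=1),
        Asset("litigation-footage-2023", age_days=800, accesses_last_30d=0, legal_hold=True),
    ]
    for asset in catalogue:
        print(asset.name, "->", target_tier(asset))
```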

The Media FinOps operating model

Together, these initiatives form the foundation of a mature Media FinOps operating model – one that focuses on managing “the cost of quality” rather than simply reducing spend.

Using a structured assessment methodology, organisations can continuously align financial accountability with technical performance:

  • Comprehensive review: usage, tagging, and storage management.
  • Quick wins: rightsizing, scheduling, and licensing optimisation.
  • Targeted projects: network re-architecture and Kubernetes tuning.
  • Continuous roadmap: forecasting, anomaly detection, and unit-cost visibility.

Results:

  • Up to 20% annual cloud spend reduction on the overall budget.
  • Stronger visibility and financial accountability across teams.
  • Confidence that every cloud investment directly supports performance, growth, and innovation.

Executive takeaway:

Ultimately, media FinOps ensures that every dollar invested in cloud infrastructure delivers measurable audience and business value.


FAQ

Can we save money without hurting quality?

Absolutely. Effective cloud cost management enables significant cost reduction without compromising Quality of Experience. The biggest early wins often come from idle storage, underutilised compute, and CDN inefficiencies within your cloud services.

Start by tuning cache strategies and introducing shielding to reduce redundant egress and replication. From there, pilot Spot or pre-emptible instances for workloads like parallel video encodes – using checkpointing and fallback to avoid service disruption.

Most organisations begin to see measurable cloud cost reduction within one to two quarters after embedding visibility and accountability into their operational rhythm.

When you connect unit economics, automation, and QoE tracking, early pilots quickly produce results – freeing up your cloud budget for innovation, performance upgrades, or audience engagement initiatives.

The business benefits of Continuous Integration

What is Continuous Integration (CI)?
Continuous Integration (CI) is a development practice where code changes are frequently and automatically merged into a shared repository managed by a version control system.

Each integration process triggers an automated build and a suite of tests, including integration tests, allowing software development teams to catch bugs early and ensure improved code quality. This rapid feedback loop helps prevent issues from becoming complex or costly to fix later in the software development cycle.

Complementing Continuous Integration is Continuous Delivery (CD) – a practice that ensures code is always in a deployable state, enabling software development teams to release updates reliably and frequently.

Together, Continuous Integration and Continuous Delivery form the backbone of modern DevOps, promoting stability, speed, and scalability in the software development process.
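The mechanics are easy to picture as a small quality gate. The sketch below is a generic Python illustration of what a CI server does on every integration: run each step in order and stop the pipeline at the first failure. The commands are placeholders for whatever build and test tooling your project actually uses.

```python
import subprocess
import sys

# Hypothetical CI gate: each step is a shell command; the first failure stops the pipeline.
# Replace the commands with your project's real build and test invocations.
PIPELINE = [
    ("build",             ["echo", "compiling project..."]),
    ("unit tests",        ["echo", "running unit tests..."]),
    ("integration tests", ["echo", "running integration tests..."]),
]

def run_pipeline(steps):
    for name, command in steps:
        print(f"--- {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"CI failed at step: {name}")
            return False
    print("CI passed - change is safe to merge.")
    return True

if __name__ == "__main__":
    sys.exit(0 if run_pipeline(PIPELINE) else 1)
```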

How does CI benefit the business beyond the development team?

Continuous Integration, combined with Continuous Deployment to the production environment, delivers significant value across the entire organisation by streamlining the software development process and accelerating time-to-market.

Automated builds, integration tests, and deployments enable rapid, reliable releases that help businesses quickly respond to customer needs and evolving market conditions.

The benefits of Continuous Integration extend beyond technical improvements to foster better collaboration and business outcomes, including:

Faster delivery of features and fixes

Continuous Integration enables quicker deployment by automating the integration and testing of code changes. This accelerates the release cycle, allowing new features and bug fixes to reach the production environment more rapidly.

Improved collaboration and transparency

By fostering regular communication and shared workflows between operations, QA, and development teams, Continuous Integration enhances teamwork and openness throughout the project lifecycle.

Increased visibility and accountability

Continuous Integration provides clear insights into every stage of software development, making it easier to track progress, identify issues early, and assign responsibility effectively.

Reduced disruptions and lower support costs

Automated testing and integration reduce the frequency of errors and unexpected issues, minimising disruptions to users and decreasing ongoing support and maintenance expenses.

More focus on innovation and process improvement

With routine integration and testing automated, teams can allocate more time and resources towards creative problem-solving and enhancing the software development workflow.

Standardised and automated software development lifecycle

Continuous Integration establishes consistent processes that are repeatable and scalable, building a sustainable competitive advantage by ensuring quality and efficiency in every release.

Benefits of DevOps automation

Why does Continuous Integration matter for product quality?

Continuous Integration is instrumental in maintaining and improving product quality from day one.

Automated tests at every integration point ensure that bugs are caught early – long before they can affect customers or accumulate as technical debt. This not only leads to cleaner, more stable code but also shortens QA cycles and reduces rework.

By enforcing code quality standards through automated tests, Continuous Integration ensures that only thoroughly tested and verified code moves forward. The result is a more reliable product, improved user trust, and fewer costly issues in production, strongly influencing the whole software development practice.

Decreasing the lead time for changes from 2 months to 1 day and saving 50% of the client’s cloud costs

The client expected significant growth and needed a much more flexible system framework and rapid product innovation. Their software needed modernisation in terms of architecture and technology used.

Thanks to our work, we decreased the lead time for changes from 2 months to 1 day, improved change failure rate from over 30% to below 10%, and saved 50% of the client’s Cloud costs.

How does CI reduce business risk?

Continuous Integration reduces business risk by offering early and continuous validation of code.

Each change is automatically built and verified with automated tests, minimising the chance of introducing regressions or breaking functionality. This real-time, rapid feedback helps software developers detect and resolve issues quickly, reducing the likelihood of production failures and customer-impacting bugs.

By maintaining a consistent and predictable delivery pipeline, Continuous Integration also safeguards brand reputation and ensures operational resilience – crucial for organisations operating in fast-moving or highly regulated industries.

What role does CI play in digital transformation?

As organisations pursue digital transformation, Continuous Integration provides the automation and agility needed to adapt quickly and innovate continuously.

Using continuous integration tools, software developers can remove traditional bottlenecks in the software development lifecycle by enabling rapid, iterative delivery – a hallmark of Agile and DevOps maturity.

Continuous Integration also fosters a culture of continuous improvement and cross-functional collaboration. By aligning with modern development practices, it allows software development teams to break down silos, respond to change efficiently, and accelerate innovation across the board.

What is the ROI of implementing Continuous Integration?

The ROI of Continuous Integration directly impacts business performance.

Faster delivery cycles and continuous testing mean quicker time-to-market, helping businesses seize opportunities and respond to customer needs in real time. Improved code quality and fewer defects translate into lower maintenance and support costs.

Continuous Integration also enhances software development teams’ productivity and morale by reducing manual errors and repetitive tasks.

Most importantly, customers benefit from more frequent, stable updates – driving satisfaction, loyalty, and long-term growth. The cumulative effect is a leaner, more responsive, and more profitable software development process.


Real-world impact: Continuous Integration/Continuous Delivery case studies

To illustrate the tangible benefits of Continuous Integration and Continuous Delivery, here are several examples of how organisations, working with Future Processing, have leveraged CI/CD to transform their software operations, reduce costs, and enhance product quality.

Restoring stability and optimising storage – GESIG

  • Challenge: After losing support from their previous software provider, GESIG faced critical operational challenges that threatened the stability of their traffic management solutions and overall business continuity.
  • Approach: A holistic strategy was implemented, prioritising critical fixes and modernising the technology stack to stabilise the application and prepare it for future growth.
  • Result: GESIG achieved a 78% reduction in client-reported bugs. This optimisation improved system manageability and reinforced GESIG’s position as a leader in reliable traffic management solutions.

Technology consulting and software modernisation – ADB SAFEGATE

  • Challenge: The need to improve a web-based platform for airport operations, enabling users to make more accurate and faster decisions through a comprehensive audit and redevelopment plan.
  • Approach: After conducting discovery workshops and a full product audit, a software makeover began, starting with architectural redesign and iterative development of key modules aligned with client needs.
  • Result: This led to a robust, ongoing partnership with ADB SAFEGATE’s European branches, delivering continuous enhancements that keep their platform agile and user-focused.

Driving monetisation and product excellence – DriveCentric

  • Challenge: DriveCentric looked for a technology partner to scale engineering efforts while maintaining product excellence and customer satisfaction in the competitive automotive CRM and AI tools market.
  • Approach: Close collaboration with key stakeholders ensured ongoing innovation, efficiency improvements, and elimination of waste to maximise ROI and unlock monetisation opportunities.
  • Result: Continuous digital product discovery and AWS cloud solutions helped DriveCentric maintain their competitive edge and deliver superior customer experiences.

IT Service Management consulting and DevOps recommendations – SNO

  • Challenge: SNO wanted to scale their service offering while incorporating DevOps standards and expertise with efficiency and business alignment in mind. During the initial talks, they recognised our expertise and experience and chose us to audit and modernise their in-house Fiber Network Management and Monitoring Platform.
  • Approach: We conducted a comprehensive audit in Software Development, DevOps, Data and ITSM followed by a phased implementation of a new Security and IT Service Management system, aligned with ISO and ITIL frameworks, coupled with process optimisation, training, and CI/CD pipeline recommendations.
  • Result: SNO optimised and secured their Network Operations Centre, enabling the company to offer formerly outsourced services directly, supported by advanced SLA reporting dashboards, integrated ITSM training and tools, CI/CD recommendations, and front-end support. These improvements ensure long-term efficiency and scalability.

FAQ

How does CI accelerate time-to-market for new features?

Continuous Integration automates integration and testing processes, enabling faster software development cycles and more frequent, reliable releases. This efficiency allows teams to deliver software more quickly without compromising quality.

Can CI help reduce software development costs?

Absolutely. By catching bugs early and automating routine testing and integration tasks, Continuous Integration significantly reduces the time and resources spent on debugging and manual QA. This not only lowers development costs but also improves the overall efficiency of the software lifecycle.

Can CI help scale development in growing companies?

Yes – Continuous Integration introduces consistency, repeatability, and transparency across the software development process, which is essential as teams expand and projects become more complex. It helps prevent integration bottlenecks and keeps growing codebases manageable by enforcing discipline and automation.

How does Continuous Integration help manage AI-generated code?

CI acts as the first line of defence against bogus code, whether it’s written by a human or an AI agent like Copilot. It automatically flags and rejects bugs and/or security vulnerabilities, significantly reducing the risk of flawed code being committed to the repository. What’s more, running automated CI jobs is often faster and more cost-effective than relying on manual work, especially when dealing with large volumes of generated code.

How does CI integrate with business-focused methodologies like Agile or Lean?

Continuous Integration complements Agile and Lean practices by promoting fast feedback, small batch releases, and continuous improvement. It supports incremental development and helps maintain momentum by ensuring that each iteration is built on stable, tested code.

Can CI be customised to align with business priorities?

Definitely. Continuous Integration pipelines can be configured to reflect specific business objectives – whether that’s enforcing regulatory compliance, optimising for performance, or meeting service-level agreements. This flexibility makes Continuous Integration a strategic tool, not just a technical one.

Why introduce DevOps in your company?

DevOps breaks down silos between development and operations, enabling faster releases, higher quality software, and more agile response to change. It boosts collaboration, automates workflows, and accelerates innovation.

Ready to deliver faster and smarter? Let’s talk about DevOps.

What is infrastructure automation and how does it work?
What is infrastructure automation?

Infrastructure automation refers to the practice of using scripts, tools, and AI-driven processes to manage, configure, and provision IT infrastructure without manual intervention.

By automating repetitive tasks, organisations can minimise human errors, boost operational efficiency, enhance security, and accelerate deployments. This streamlined approach ensures consistency across IT environments and reduces operational costs, allowing teams to focus on innovation rather than routine maintenance.

Infrastructure automation


What is the difference between configuration management and infrastructure automation?

Configuration management and infrastructure automation are closely related but serve distinct purposes in IT operations.


Configuration management

Configuration management focuses on automating software and system settings. It ensures that applications, operating systems, and services are consistently configured across all environments. Configuration management tools like Ansible, Puppet, and Chef help enforce desired states, manage dependencies, and apply updates efficiently.


Infrastructure automation

Infrastructure automation, on the other hand, goes further by provisioning and managing entire IT environments, including servers, networks, storage, and cloud resources. It handles the deployment, scaling, and orchestration of infrastructure components, often using tools like Terraform, AWS CloudFormation, or Kubernetes.

While configuration management ensures systems remain in their intended state, infrastructure automation lays the foundation by setting up and maintaining the underlying infrastructure.

Together, these two practices create a seamless, automated IT ecosystem that enhances scalability, reliability, and operational efficiency.
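The declarative model behind infrastructure automation can be illustrated with a toy example: describe the desired state, compare it with the current state, and compute the actions needed to converge. The Python sketch below does exactly that on invented resources; a real tool such as Terraform would then execute the plan against provider APIs.

```python
# Toy illustration of the declarative model: infrastructure is described as a desired state,
# and the tool computes the create/update/delete actions needed to reach it.
# Resource names are invented for the example.

desired_state = {
    "vm-encoder-1": {"size": "large"},
    "vm-encoder-2": {"size": "large"},
    "bucket-archive": {"region": "eu-west"},
}
current_state = {
    "vm-encoder-1": {"size": "small"},        # exists but drifted from the desired size
    "vm-old-render": {"size": "medium"},      # no longer wanted
}

def plan(desired, current):
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name, spec))
        elif current[name] != spec:
            actions.append(("update", name, spec))
    for name in current:
        if name not in desired:
            actions.append(("delete", name, current[name]))
    return actions

if __name__ == "__main__":
    for action, name, spec in plan(desired_state, current_state):
        print(f"{action:<7} {name}  {spec}")
```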


What IT infrastructure processes can be automated?

Let’s now have a closer look at the key IT infrastructure processes that can be automated:

  • Infrastructure provisioning & deployment – automating the setup and deployment of servers, networks, and cloud environments to improve speed and reduce errors.
  • Infrastructure as Code (IaC) – defining infrastructure components as code to ensure consistency, repeatability, and error minimisation.
  • Configuration management – automating virtual machine and software configurations to maintain uniformity across environments.
  • Cloud resource management – optimising and automating the allocation and scaling of cloud resources to enhance performance and cost efficiency.
  • FinOps & cloud cost optimisation – using automation to monitor cloud spending, enforce cost-saving policies, and optimise resource usage.
  • Continuous Integration/Continuous Deployment (CI/CD) – automating infrastructure reliability checks, security vulnerability scanning, policy verification, and seamless deployment of infrastructure updates.
  • Container orchestration & microservices management – managing microservices and containers efficiently using tools like Kubernetes.
  • Monitoring and logging – continuously tracking system performance and security to detect and resolve issues proactively.
  • Security & compliance automation – enforcing security policies and regulatory compliance through automated access control and vulnerability scanning.
  • Kubernetes automation – managing Kubernetes clusters and namespaces efficiently in hybrid cloud environments.
  • Multi-cloud automation – extending automation across multiple cloud providers like AWS, Azure, and Google Cloud.
  • Network automation – automating networking and security services to accelerate deployments.
  • DevOps for Infrastructure – implementing IaC with infrastructure pipelines to enable iterative development and automated deployments.
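Several of the items above, from FinOps to security and compliance automation, ultimately come down to policy checks that run without human intervention. As a hypothetical illustration, the Python sketch below scans a resource list for missing cost-allocation tags; the resources and required tags are invented for the example.

```python
# Policy-as-code sketch for tagging: scan resources and flag anything missing the tags
# that cost allocation and compliance depend on. All names below are illustrative.

REQUIRED_TAGS = {"owner", "environment", "cost-centre"}

resources = [
    {"id": "vm-0153", "tags": {"owner": "vod-team", "environment": "prod", "cost-centre": "media-01"}},
    {"id": "vm-0226", "tags": {"owner": "vod-team"}},
    {"id": "bucket-raw-feeds", "tags": {"environment": "dev", "cost-centre": "media-01"}},
]

def non_compliant(items, required=REQUIRED_TAGS):
    """Return (resource id, missing tags) for every resource that violates the tagging policy."""
    findings = []
    for item in items:
        missing = required - set(item["tags"])
        if missing:
            findings.append((item["id"], sorted(missing)))
    return findings

if __name__ == "__main__":
    for resource_id, missing in non_compliant(resources):
        print(f"{resource_id}: missing tags {', '.join(missing)}")
```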



What are the key benefits of infrastructure automation?

Infrastructure automation provides numerous benefits that help improve efficiency, security, and cost management. The most important ones include:


Faster provisioning and scaling of IT infrastructure resources

Automation reduces provisioning times from hours or days to minutes, enabling rapid deployment of resources and seamless scaling based on real-time demand.


Reduced operational costs and elimination of labour-intensive tasks

Automating tasks like configuration management and software updates lowers labour costs and minimises human intervention, freeing IT teams to focus on more strategic initiatives.


Improved consistency and security across environments

Automated workflows enforce standardised configurations across on-premises, cloud, and hybrid infrastructures, reducing errors. Security policies embedded in automation scripts ensure continuous compliance and vulnerability scanning.


Enhanced system reliability and uptime

Automation includes self-healing mechanisms, continuous monitoring, and automated failover processes, minimising outages and improving service availability.

Key benefits of infrastructure automation


What industries benefit the most from infrastructure automation?

Infrastructure automation is particularly beneficial for industries that rely on scalability, strict compliance requirements, and cost optimisation. Some of the key industries that gain the most from automation include:


Finance

Banks, investment firms, and fintech companies require highly secure, compliant, and scalable infrastructure to handle large transaction volumes and sensitive customer data. Automation helps ensure regulatory compliance, streamline disaster recovery, and enhance fraud detection using real-time monitoring tools.


Healthcare

Hospitals, telemedicine providers, and health tech companies must maintain secure, HIPAA-compliant environments while managing vast amounts of patient data. Automation improves data security, facilitates rapid infrastructure scaling for patient portals and digital health applications, and ensures high availability for critical healthcare systems.


Retail & E-commerce

Retailers and online stores experience fluctuating demand, requiring dynamic scaling of IT resources during peak shopping seasons. Automated infrastructure enables seamless traffic handling, optimises inventory management, and improves website performance through continuous monitoring and deployment.


Manufacturing

Smart factories and industrial automation rely on real-time data processing and IoT-driven systems. Automating infrastructure supports predictive maintenance, optimises supply chain logistics, and enhances operational efficiency by integrating AI-driven analytics.


SaaS & Technology

Software-as-a-Service (SaaS) providers, cloud platforms, and tech startups need rapid deployment, continuous integration, and high availability. Automation accelerates software delivery, enhances security, and optimises cloud resource allocation, reducing costs while improving performance and scalability.


What are the main infrastructure automation tools?

Some of the most popular infrastructure automation tools include:

  • Terraform (Infrastructure as Code – IaC): an open-source tool that allows users to define and provision infrastructure using declarative configuration files, ensuring consistency across cloud and on-prem environments.
  • Ansible (configuration management): a simple yet powerful automation tool that automates application deployment, system configuration, and IT orchestration without requiring agents on target machines.
  • Puppet (automated deployments): a robust configuration management and automation tool that helps manage infrastructure at scale, ensuring consistent deployments and enforcing security policies.
  • Kubernetes (container orchestration): a widely used container orchestration platform that automates the deployment, scaling, and management of containerised applications, enhancing efficiency in microservices architectures.
  • Chef (IT automation for cloud and on-prem): a configuration management tool that automates infrastructure provisioning, ensuring consistent software configurations across servers and cloud environments.


What are the biggest challenges of implementing infrastructure management automation?

While infrastructure automation offers significant benefits, organisations often face several challenges when implementing it. Let’s have a closer look at some of the biggest obstacles:


Complexity of legacy systems

Many businesses rely on outdated infrastructure that was not designed with automation in mind. Migrating or integrating automation with legacy systems can be complex, requiring significant time and resources to modernise existing environments.


Skill gaps in automation tools

Implementing infrastructure automation requires expertise in tools like Terraform, Ansible, and Kubernetes, as well as proficiency in scripting languages. Many IT teams lack these specialised skills, making it difficult to adopt automation without additional training or hiring experienced professionals.


Security and compliance concerns

While automation can improve security, it also introduces new risks if not properly implemented. Misconfigurations in automated scripts can lead to vulnerabilities, and ensuring compliance with industry regulations (such as GDPR, HIPAA, or PCI-DSS) requires careful policy enforcement within automation frameworks.


Integration with existing IT environments

Businesses often operate hybrid or multi-cloud environments with a mix of on-premises and cloud-based resources. Ensuring smooth integration between automated processes and existing IT infrastructure can be challenging, requiring robust planning and compatibility testing.

Future Processing can assist businesses in overcoming these challenges by offering expertise in infrastructure automation strategy, Infrastructure as Code (IaC) implementation, DevOps consulting, automated infrastructure management, and cloud automation solutions.

If you are keen to unlock the full potential of infrastructure automation, get in touch today!

DevOps tools that give you superpowers

We’ve all been there. And it was never easy. And huge manuals never helped. Fortunately, at Future Processing we’ve built some pretty amazing DevOps tools which help people solve such problems in a much better way. Let’s look at how they affect the way we work and which of them have already made our lives so much easier.


DevOps tools make people more engaged

Without DevOps tools, an account manager who had to solve a problem with a piece of software would need to rely on other people to assemble various components so that they matched the specific setup. It would take time, and success would rely on the memory and availability of these people.

DevOps tools allow you to deal with such matters independently of others. All versions of the software are stored in an artifact repository, and thanks to reproducible code builds on the CI server you can recreate every component. Assembling the needed setup means simply checking out a branch containing the configuration code and running the release pipeline on a testing environment. This, in consequence, allows the scripts to deploy the correct component versions and automatically apply customer-specific configurations.

What’s more, the configuration branch can be easily transformed into a pull request, allowing others to review it – it’s a valuable learning and collaborative experience.
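
To make this workflow more tangible, here is a minimal sketch of what such a release pipeline could look like, written in GitHub Actions-style YAML. The workflow name, the deploy.sh and apply-config.sh scripts and the environment input are illustrative assumptions, not our actual setup.

    # Hypothetical release pipeline sketch – all names below are placeholders.
    name: release-to-test

    on:
      workflow_dispatch:              # started manually from a chosen configuration branch
        inputs:
          environment:
            description: "Target testing environment"
            default: "test"

    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4     # checks out the configuration branch the run was started from
          - name: Deploy pinned component versions
            run: ./deploy.sh --env "${{ github.event.inputs.environment }}"
          - name: Apply customer-specific configuration
            run: ./apply-config.sh --env "${{ github.event.inputs.environment }}"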


DevOps tools give people confidence

Our work very often concentrates on minor features, bug fixes and ongoing improvements. At this smaller scale, each release becomes much more manageable, and the frequency of these daily releases has a huge impact on our confidence. The standard workflow involves creating a pull request with the proposed changes, followed by automatic validation by the CI system, peer review, approval, merging and deployment. The review process serves a dual purpose: it provides an extra layer of verification, and it fosters learning and collaboration.
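
As an illustration only, the validation part of such a workflow could be expressed as a pull request pipeline along these lines (again GitHub Actions-style YAML; the build.sh and run-tests.sh scripts are placeholders, not the real project commands):

    # Hypothetical pull request validation sketch – commands are placeholders.
    name: pr-validation

    on:
      pull_request:
        branches: [ main ]

    jobs:
      validate:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Build
            run: ./build.sh           # placeholder for the project's build step
          - name: Run automated tests
            run: ./run-tests.sh       # placeholder for unit and integration tests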


DevOps tools give you quick feedback and reduce lead time

One of the principles of the DevOps approach is a rapid feedback loop: if there is a mistake in the configuration, the automated validation process catches it immediately. Identifying an error in the way specific settings were applied takes a few hours, and the automated deployment routine provides feedback on the success of any change on the very same day you introduce it.

Also, code review comments are an invaluable source of knowledge: they help you gain a deeper understanding of the system and improve your skills.

In DevOps, the person responsible for deploying a change is the one who initiated it – this is how you learn whether the system really works, and how it breaks. Over time, monitoring deployment progress and observing the system’s behaviour immediately afterwards, such as changes in metrics and logs, becomes second nature.

Another benefit is that the DevOps approach dramatically reduces delivery times, so a client gets the value sooner or can provide feedback earlier if something else was expected.


And now the promised overview of the actual DevOps tools and how we used them


Infrastructure stack

Our principle was to ensure that the entire infrastructure was automated. We started with the provisioning of virtual machines and networks in the AWS cloud using Terraform, a cloud-agnostic infrastructure as code tool. We chose Terraform so that our customers’ infrastructure could be hosted with various providers.

With that done, we used Ansible to apply operating system configurations and install the necessary tools. We kept only a few small roles in Ansible – a deliberate decision that made the system easier to manage while also bolstering security by reducing the potential attack surface.

All the heavy lifting was done by Docker Swarm. Each application had its dedicated Docker image, complete with all the runtime dependencies, and Swarm efficiently managed the workload across a fleet of virtual machine nodes.

We chose Docker Swarm because at the time it was actively developed by Docker Inc. It was much more straightforward compared to Kubernetes, which back then was not as feature-rich as it is today.


Infrastructure as Code

Storing all the infrastructure as code in a single, comprehensive repository had a big impact on collaboration and promoted shared responsibility. What mattered most was the transparency, which effectively combated potential knowledge silos.

Another important aspect was the ability to comprehensively view and simultaneously compare all development and testing environments. It greatly helped us to understand which features were being deployed and tested at each stage. And although the infrastructure we worked on was extensive and intricate, we had just one highly skilled specialist involved.

In the DevOps approach there is no strict division between development and operations, so everyone was encouraged to make configuration changes, deploy code, and contribute to setting up the infrastructure.


Executable documentation

Our system was installed, configured and operated by people with different skills, which is why we preferred tools that were straightforward and easy to understand. Most of the configuration relied on YAML files for Docker Swarm, which share the same format as Docker Compose – this made it especially convenient to set things up on any machine that runs Docker.
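
For illustration, a minimal stack file in this format could look like the sketch below; the image name, port and replica count are assumptions rather than our real configuration:

    # Illustrative Docker Swarm stack file (Compose format) – values are placeholders.
    version: "3.8"

    services:
      web:
        image: registry.example.com/web-app:1.4.2   # hypothetical image from the artifact repository
        ports:
          - "8080:8080"
        deploy:
          replicas: 3                 # Swarm spreads the replicas across the cluster nodes
          restart_policy:
            condition: on-failure

The same file can be started locally with docker compose up or deployed to a Swarm cluster with docker stack deploy, which is what makes the shared format so convenient.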

A comparable approach was used with our deploy.sh Bash script, which codified the steps an operator would otherwise enter manually on the machine. The script was enriched with comments, effectively transforming it into executable documentation that was run on a daily basis. This approach eliminated the need to repeat commands from an operational manual.

The distinct separation of various layers within our infrastructure (virtual machines through Terraform, operating systems via Ansible and applications via Docker Swarm) allowed customers the flexibility to choose which components to use. This modularity proved to be of utmost importance, particularly in a situation where a public cloud environment was not a viable option.

And there was a bonus: a script designed to generate release notes based on code and source control metadata. It was a natural outcome of our commitment to associating each commit message with a corresponding Jira ticket, which allowed us to generate the list of changes in our release notes automatically from that data. What’s more, our installation instructions were essentially copy-pastes of the scripts we had prepared, resulting in comprehensive documentation with minimal effort.



So, is DevOps really that good?

Are you still wondering whether the DevOps approach we adopted was that good and worth it? Would we recommend it to others? And to whom exactly?

The extent of automation involved makes DevOps undeniably beneficial for medium-sized projects involving dozens of people. When it comes to larger projects, I would say it’s a proper must-have – without it, a lot of precious time is wasted on repetitive, everyday tasks. But we’ve used the same approach, albeit with less extensive tooling, for smaller teams of 5 to 7 developers. And we still got all the benefits!

Key benefits of using DevOps tools

You may think DevOps is just about technology. I would rather say it’s about people. Implementing DevOps fosters a more engaged team that feels empowered and accountable for the product. This, in turn, creates numerous opportunities for learning, fuelled by candid feedback and a boost in confidence. I write more about it on my personal blog: check it out. Working in such an environment is truly a pleasure, and I keep feeling the benefits of it!

Alternative to Docker Desktop – solutions to consider in 2022
https://www.future-processing.com/blog/docker-desktop-solutions-to-consider/ Tue, 25 Jan 2022 08:25:00 +0000

If you are using Docker Desktop for your software development work, now is the time to consider whether to stick with it or switch to an alternative. Keen to know what such a change could mean for you and whether there are any good alternatives worth considering? Do read on!


Docker as a tool

Docker is a popular open-source tool that uses OS-level isolation to containerise applications. Since its release in 2013, it has been used by developers all over the world to develop, share, and run applications. Because it separates the applications being developed from the underlying infrastructure, it allows software to be delivered quickly and easily.

As announced last year, 2022 brings changes to Docker’s subscription plans.

As of February, Docker Desktop becomes a paid service. While it will still be free for personal and non-business use, larger businesses will need to pay to continue using it.

This is why we encourage you to read about an alternative to Docker Desktop that can do the job well and remain cost-effective.


An alternative to Docker Desktop – Docker Engine and WSL2

A good alternative to Docker Desktop is Docker Engine on WSL2. Docker Engine is one of the components of the Docker Desktop system (it is the software that hosts the containers); the other parts are the Docker CLI client, Docker Compose, Docker Content Trust, Kubernetes, and Credential Helper. As stated on Docker’s website, Docker Engine works as a client-server application with:

  • a server with a long-running daemon process dockerd.

  • APIs which specify interfaces that programs can use to talk to and instruct the Docker daemon.

  • a command line interface (CLI) client docker.

Docker Engine can be installed on WSL2 (Windows Subsystem for Linux) and does not require the actual Docker Desktop, so it is a great alternative, sufficient for the everyday work of software developers. It’s not a new solution – many developers have been using it for some time now, especially if they wanted to avoid the heavyweight and sometimes time-consuming Docker Desktop system.


Here you can find and download step-by-step instructions on how to install Docker Engine on WSL2, created by our software engineers. Here’s a sneak peek:

How To Install Docker Engine on WSL2.zip

Instructions.zip


Importantly, all the changes in Docker’s subscription plans apply to Docker Desktop only, meaning Docker Engine is unaffected and can be used without any extra costs.

If you are keen to discuss the solution you are using at the moment, or if you are worried that the change might have too much of an impact on your software, get in touch with our team! We will be happy to explain the situation in more detail and talk about alternatives.

Does Docker make sense in 2021?
https://www.future-processing.com/blog/does-docker-make-sense-in-2021/ Wed, 19 May 2021 09:58:59 +0000
What are Docker and Kubernetes?
Docker makes it possible to pack an application (e.g. a JAR file with compiled Java code) together with its runtime environment (e.g. the OpenJRE JVM) into a single image, from which containers are then created. In fact, all the operating system dependencies are added to the Docker image as well.

This allows using the same package on the developer’s laptop, in the test environment, and in production. So much for theory.

Kubernetes is an orchestration system which manages several containers and attributes resources to them (CPU, RAM, storage) from a number of machines in the cluster.

It is also responsible for the lifecycle of the containers and for joining them into Pods. Compared to Docker, it operates on a higher level, as it controls many containers on many machines.

If the Docker container is an equivalent of a virtual machine, Kubernetes is like a hosting or cloud provider. Docker (or Docker Compose) helps run various processes, combine them in a network, and attribute storage within one computer. Kubernetes can do the same within a cluster composed of a couple of computers.
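
To picture the single-machine case, here is a minimal, illustrative docker-compose.yml that runs two processes, joins them in a network and attributes storage to one of them; the image names and ports are assumptions:

    # Illustrative docker-compose.yml – two services, one network, one volume,
    # all on a single machine. Names and versions are placeholders.
    version: "3.8"

    services:
      api:
        image: example/api:latest
        ports:
          - "3000:3000"
        networks:
          - backend
      db:
        image: postgres:13
        volumes:
          - db-data:/var/lib/postgresql/data    # persistent storage for the database
        networks:
          - backend

    networks:
      backend:

    volumes:
      db-data: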

Kubernetes brings Docker down to the level of a component which runs the containers. Thanks to the introduction of the CRI standard (Container Runtime Interface), these components can be changed. At present, only containerd and cri-o are compatible with the CRI. Docker requires the dockershim adapter, which is precisely what the programmers supporting Kubernetes want to get rid of.



Why does Docker matter?


In the past few years, Docker has transformed from dotCloud’s side project into a billion-dollar business. Despite some 280 million USD in venture capital funding, Docker, Inc. didn’t do well as an enterprise, and its enterprise business was subsequently bought by Mirantis. The purchase amount was not published, which is quite curious. I guess they got a bargain.

Mirantis’s flagship product is Kubernetes-as-a-service – and so they compete with VMware and, obviously, with the cloud providers. Kubernetes is so important for the company that they initially wanted to maintain Docker Swarm for two years only, but they quickly backed out of that, surely pressured to do so by their current clients. Actually, I know a company that has a large Docker Swarm installation, and migrating to another solution would be a difficult affair for them.


What is Docker Swarm?

Docker Swarm is an orchestration system built in Docker’s distribution.


It’s a bit like Kubernetes, but as easy to use as regular Docker. Of course, there’s more to it – nodes, replicas, grids – but still, the cluster view is really simplified in comparison with Kubernetes.

Does that mean Mirantis bought a product that competes with its flagship one? Yeah, in a way. Docker Swarm has already recovered from its teething problems (e.g. the bug that assigned duplicate IP addresses), so it looks like a reliable product for small teams. The problem is that small teams and small clusters mean small money.

Besides, Docker Swarm is simply too simple. In our team, it took a single person to create and manage the Docker Swarm clusters. Apart from the bugs I’ve mentioned, there’s not much work involved. Large updates arrive along with Docker, which doesn’t happen too often, so that’s another problem gone.



What is the agenda of Mirantis (owner of Docker Enterprise)?


The question is: if Mirantis makes money on Kubernetes-as-a-service for enterprise clients, and Kubernetes removes Docker support, what is that all about? From my point of view, a company that profits from Kubernetes has no reason to invest in Docker once it is no longer supported by Kubernetes.


Mirantis takes care of the current enterprise customers


Later in December, Mirantis announced that, together with Docker, Inc., they were going to maintain dockershim (Docker’s adapter to the CRI interface). They explained that their current customers use more complex Kubernetes installations, which depend on specific Docker Engine features. What does that change? The situation resembles the one related to Docker Swarm: Mirantis’s technical debt will rise (more on that below).

These are only my private musings. I haven’t seen Mirantis’s contract with Docker, Inc., or their strategy. I rely on official press releases and my observation of the market. Trying to imagine what a big company might do based on your own experience is an interesting mental exercise, which allows you to look from a distance at corporations that produce the technologies you use. I do recommend that.


What does that mean?


Over the years, Docker Engine has grown and evolved into a modular architecture, and various implementations of components such as logging have emerged. This gave room for standardising and simplifying application architecture: it was enough for an app to log to the standard Linux streams stdout and stderr, and Docker collected and stored the logs locally. It also provided an interface for reading those logs in the form of the docker logs command.

For developers, it’s very convenient to have a single tool to view the logs of apps written in Java, Node, PHP, or other languages. IT Ops, who maintain the systems, find other things important as well: the guarantee that the logs won’t get lost, their retention, and the speed with which they fill up disk space. This is a completely different set of problems, and Docker cannot solve them.


A single machine vs. a cluster


What works perfectly for a single machine might not be so great in a cluster. A good example here is docker service logs, the equivalent log viewer for Docker Swarm (the cluster orchestration system built into Docker). Unfortunately, in this case, the logs are not displayed in chronological order, which can be caused by time differences between individual machines in the cluster.

The use of NTP may reduce this problem to a certain extent, but it’s not the best solution. For log ordering, it’s better to use a central aggregator that can add a timestamp at the moment it receives the log. This is a solution on a different level, although usually a necessary one in distributed systems such as clusters.
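
One common way to wire this up with Docker is to point a service’s logging driver at such an aggregator. A minimal sketch, assuming a GELF-compatible collector (e.g. Graylog or Logstash) listening at a hypothetical log-aggregator.internal address:

    # Illustrative Compose/Swarm service shipping logs to a central aggregator.
    # The image and the aggregator address are placeholders.
    services:
      web:
        image: example/web:latest
        logging:
          driver: gelf
          options:
            gelf-address: "udp://log-aggregator.internal:12201"
            tag: "web"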

In short: it’s great that Docker can facilitate and standardise so many things. And yet, improvements like the logging setup make sense only if you work on a single machine. Once you use clusters, matters get complicated and the facilitation becomes an obstacle.

To explain this mismatch, I will use the terminology of the Cynefin framework: the problem of a distributed system is intrinsically complex, while the solutions Docker applies are aimed at intrinsically complicated systems.

In other words: the solutions chosen by Docker might work for a single machine, but they are not suitable for clusters, which contain a number of machines.


Programmers


For programmers, technically, little changes. We are still going to build Docker images, as they are compatible with the OCI standard (Open Container Initiative). This means that any compatible container runtime will be able to run these images, be it locally or in a cluster.

In my opinion, the most important changes concern the way we think about containers. It’s time to abandon the analogy of a container as a “light virtual machine” and start thinking in terms of cloud-native applications. What’s crucial here is that one container means one process, and that resources, such as local files, are temporary.

On the other hand, it’s time to start perceiving a container as an instance of an application running in the cloud. We need to remember that there will be many copies of the app and that nobody knows on which machine a given container will run. As a result, we cannot rely on local files, as a new instance of the container won’t have access to the files saved by the previous one.

Another issue is the single process in a container. Managing the container’s lifetime and load balancing between instances should be left to the orchestration system (like Kubernetes). I once encountered a problem during a database migration where some of the containers didn’t get a new address, because PM2 (a process manager for Node) ran inside the container instead of Node directly, and restarting the container didn’t bring the desired effect.

If the target deployment environment is Kubernetes, I think it’s also a good idea to get interested in solutions that make it possible to comfortably run apps on a local Kubernetes cluster. I’m referring here to tools such as Skaffold (from Google), Draft (Microsoft), Tilt, or KubeVela.

Docker Compose works fine if the target environment is Docker Swarm, because they use the same YAML file format – but that’s a different story. I’m talking about a rapidly changing market here, and I’m sure I’ll find something in it for myself.


SREs / IT Ops


For SREs and IT Ops (the people who maintain the infrastructure), things get more complicated if Docker Engine is used in Kubernetes. It might be enough to switch to containerd as the CRI implementation; in that case, it all depends on how many Docker Engine dependencies have leaked into the infrastructure.

Docker-in-Docker (DinD) is a good example here. It is used to build images on CI (Continuous Integration) servers. As early as 2015, DinD for CI was considered a bad practice, but before that could become common knowledge, it turned out that DinD was a sort of quick win for running CI in a cluster.

Of course, Mirantis will gladly listen to any complaints about a complicated CI system’s dependence on Docker-in-Docker or other legacy setups. After all, they declared they were going to support dockershim, and they make money on that. I wonder, though, how much it will cost to get rid of such a problem.

I’m interested in this personally, because I’m working on a complex CI setup which uses DinD to build Docker images. I realise that 2021 is a year for preparing the transition to a different CRI – or for staying on dockershim. I’m afraid the latter option would mean throwing yourself at the mercy of Mirantis, which may be a strategic risk. We’ll see.


Consultants


For consultants, this kind of transition is great news. After carefree years of using Docker in developer teams, it’s time for tidying up, teaching and applying good practices – all with the objective of a smooth transition to more restrictive environments, like Kubernetes.

Personally, after six years of using Docker (including three years with Swarm), I’m learning new runtimes, orchestration systems, and cluster management. These topics are now entering the scope of interest of companies other than the tech giants.

On the other hand, companies such as Mirantis or VMware have a vital interest in implementing and supporting clusters, and they charge a lot for that. The same applies to the cloud providers – AWS, Azure, and GCP – which all offer hosted Kubernetes. Suffice it to say that even the independent provider Linode has offered the Linode Kubernetes Engine (LKE) since 2019.


Summary


So, should you worry about the fact that Kubernetes is deprecating Docker? You should… and you shouldn’t. I believe that the cloud business will keep growing and more and more companies will migrate to the cloud. In this environment, application containers and horizontal scaling (across multiple machines) are a natural direction. Perhaps we’ll just have to live with the complications resulting from the use of clusters.

If horizontal scaling is inevitable, Kubernetes will make a lot of things easier, even though it seems complicated in itself. In fact, it is extensive because the problem it solves (allocating resources in a cluster) is essentially complex. Kubernetes provides the basic tools and terminology to deal with this complexity.

With time, solutions simplifying Kubernetes will surely turn up. All the major cloud providers already offer hosted Kubernetes clusters as a service, which relieves IT Ops of many operational tasks. Because Kubernetes configuration takes the form of declarative YAML files, it will be possible to build tools that let you more or less click your way through the clusters. I’d even say that Kubernetes YAML will be to clusters what HTML is to the World Wide Web.
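
To show what that declarative YAML looks like in practice, here is a minimal, illustrative Deployment manifest; the image name, labels and replica count are placeholders. You describe the desired state, and Kubernetes works out how to reach and maintain it.

    # Illustrative Kubernetes Deployment – all names and values are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-app
    spec:
      replicas: 3                  # desired state: three copies, scheduled anywhere in the cluster
      selector:
        matchLabels:
          app: web-app
      template:
        metadata:
          labels:
            app: web-app
        spec:
          containers:
            - name: web-app
              image: registry.example.com/web-app:1.4.2
              ports:
                - containerPort: 8080

Applied with kubectl apply -f, a single file like this is also exactly the kind of thing a future “click-through” tool could generate for you.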

Do you want to know how Cloud can help your business develop?
