How to choose which IT conferences to attend


Source: CIO Magazine


From user symposiums to vendor-neutral events, there’s no shortage of tech conferences to sign up for. Each offers its own upside, whether that’s strategic insight into how another company implemented a new technology or simply an opportunity to grow your network. But in a fast-paced profession where there is always something new to learn, and with conferences taking time away from pressing work, how do you know which ones are worth attending?

To read this article in full, please click here


You Might Also Like

6 Kubernetes workflows and processes you can automate

Once upon a time, the very concept of a “workflow” might have seemed antithetical to Kubernetes.
Consider this older definition from the business process management world, via Wikipedia: “A workflow consists of an orchestrated and repeatable pattern of activity, enabled by the systematic organization of resources into processes that transform materials, provide services, or process information.”
“Orchestrated and repeatable” sure sound relevant to Kubernetes, but that definition (and many of its variants) also suggests state – something Kubernetes and containers in general weren’t thought to be good at in their initial phases. In the early days, a term like “stateful application” would have been viewed as a no-go for Kubernetes, says Ravi Lachhman, evangelist at Harness.
“Since containerized workloads are ephemeral and meant to terminate quickly and gracefully, this would not be conducive to a workflow that would be long-lived,” Lachhman says. “Inherently, workflows are stateful [and] need to live to give decisions or move forward workloads.”
[ Kubernetes 101: An introduction to containers, Kubernetes, and OpenShift: Watch the on-demand Kubernetes 101 webinar.]
As Kubernetes and its ecosystem have evolved, however, automating certain workloads and processes – including for stateful applications – has become much more achievable. Lachhman points to Kubernetes Operators as a particular boon for workflow automation – we’ll discuss this in detail at the end of this post.
Workflows you can automate with Kubernetes
In the meantime, we asked Raghu Kishore Vempati, director of technology, research, and innovation at Altran, to take us through some key examples of workflows and processes that can be automated with Kubernetes. Gary Duan, CTO at NeuVector, also shares some thoughts from a security automation standpoint.
1. App setup/Installation
For large solutions that comprise several applications or more, the setup and installation process almost inevitably requires automation to reduce the operational burden, Vempati notes. Otherwise, cluster management can become significantly complex, especially as the various installation units are versioned over time.
“Adopting a DevOps approach, Continuous Delivery of applications and their configuration on a Kubernetes cluster can be completely automated,” Vempati says. “A single installation/setup for a particular logical app plane could consist of several resources that could include deployments, services, secrets, stateful sets, etc. Several such installation units belonging to a single application/solution can be orchestrated in an order to be set up on the K8s cluster.”
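To make that concrete, here is a minimal sketch of one such ordered installation unit, using the official Kubernetes Python client. The namespace, resource names, and image are hypothetical, and a real pipeline would add error handling and idempotency (or delegate to a tool such as Helm or a GitOps operator):

```
# A minimal sketch: applying one "installation unit" in a fixed order
# (secrets before workloads, workloads before services) with the official
# Python Kubernetes client. All names and manifests are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()
apps = client.AppsV1Api()

NAMESPACE = "demo-app"

secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "demo-db-credentials"},
    "stringData": {"password": "change-me"},
}

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-api"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "demo-api"}},
        "template": {
            "metadata": {"labels": {"app": "demo-api"}},
            "spec": {"containers": [{"name": "api", "image": "demo/api:1.0"}]},
        },
    },
}

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "demo-api"},
    "spec": {
        "selector": {"app": "demo-api"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}

# Order matters: dependencies (secrets/config) first, then workloads, then services.
core.create_namespaced_secret(NAMESPACE, secret)
apps.create_namespaced_deployment(NAMESPACE, deployment)
core.create_namespaced_service(NAMESPACE, service)
```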
[ Kubernetes terminology, demystified: Get our Kubernetes glossary cheat sheet for IT and business leaders. ]
2. Pod and node scaling
Dynamic pod scaling is one of Kubernetes’ most important features. Vempati notes that the Horizontal Pod Autoscaler, or HPA, enables the system to scale pods based on common system metrics (e.g., CPU utilization) or custom app metrics (e.g., number of requests), which are configurable for the cluster.
“However, the configuration itself could be subject to change for various resources on the cluster,” Vempati says. “This process of configuring the scalability behavior of the pods itself can be automated and can be subject to various factors and dependencies within and outside the application/solution.”
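As a rough illustration of automating the scaling configuration itself, the sketch below creates or patches an HPA whose replica bounds depend on an external factor; a simple business-hours flag stands in for whatever real signal drives the decision. The deployment name and thresholds are hypothetical:

```
# A sketch of automating the HPA *configuration itself*: create or update
# an autoscaler for a hypothetical "demo-api" deployment, with bounds that
# depend on a factor outside the cluster (here, time of day).
import datetime

from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

NAMESPACE = "demo-app"
peak_hours = 9 <= datetime.datetime.now().hour < 18  # stand-in for a real signal

hpa = {
    "apiVersion": "autoscaling/v1",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "demo-api"},
    "spec": {
        "scaleTargetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "demo-api",
        },
        "minReplicas": 4 if peak_hours else 2,
        "maxReplicas": 20 if peak_hours else 8,
        "targetCPUUtilizationPercentage": 70,
    },
}

try:
    autoscaling.create_namespaced_horizontal_pod_autoscaler(NAMESPACE, hpa)
except ApiException as e:
    if e.status == 409:  # already exists: patch the scaling bounds instead
        autoscaling.patch_namespaced_horizontal_pod_autoscaler("demo-api", NAMESPACE, hpa)
    else:
        raise
```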
There is a corollary here with another key concept in Kubernetes architecture: nodes.
“K8s by itself doesn’t provide for node scaling,” Vempati says. “It [does], however, have the capability where nodes can be added or removed from the cluster without impacting the execution of applications.”
Various cloud platforms offer automation of node scaling as part of the platforms or services, according to Vempati, including for hybrid or on-premises environments. “As with pod scaling, while the node scaling can be configured, the process of configuring the scalability of the nodes itself can be automated based on factors internal and external to the application/solution,” Vempati says.
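Kubernetes does expose the node-level primitives such automation builds on. A small sketch, again with the Python client and a hypothetical node name, lists nodes and cordons one so it can later be removed without abruptly impacting running applications; the actual provisioning and removal of machines would be handled by the cloud platform or a cluster autoscaler:

```
# Node-management primitives Kubernetes itself provides: listing nodes and
# cordoning one (marking it unschedulable) ahead of removal. The node name
# is hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for node in core.list_node().items:
    print(node.metadata.name, node.spec.unschedulable)

# Cordon a node so no new pods land on it; existing pods keep running
# until drained, so applications are not abruptly impacted.
core.patch_node("worker-node-1", {"spec": {"unschedulable": True}})
```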
[ Read also: 5 open source projects that make Kubernetes even better. ]
3. Persistent storage management
Stateful applications such as databases are no longer taboo in Kubernetes environments – far from it.
“K8s has many features that help run stateful applications,” Vempati says. “For example, it provides the ability to dynamically provision storage volumes on-demand.”
Vempati also points to the ability to clone persistent volumes for storage systems that implement the Kubernetes CSI spec, as well as the ability to capture snapshots of the volumes that are accessed by applications, as key features on this front.
“For large applications running in production, these capabilities are very useful,” Vempati says. “For applications that require high availability of data, having the latest snapshots of data and the time to restoration of access to the data is very critical. Automation of the above-mentioned capabilities and their associated configurations for the applications can help achieve the same.”
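For instance, dynamic provisioning boils down to creating a PersistentVolumeClaim that a CSI-backed StorageClass satisfies on demand. Here is a minimal sketch with a hypothetical storage class and namespace; snapshotting and cloning go through the CSI snapshot CRDs and are omitted:

```
# A sketch of dynamic volume provisioning: a PersistentVolumeClaim that a
# CSI-backed StorageClass fulfills on demand. Class and names are hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "demo-db-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "fast-ssd",  # hypothetical CSI-backed class
        "resources": {"requests": {"storage": "20Gi"}},
    },
}

core.create_namespaced_persistent_volume_claim("demo-app", pvc)
```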
Let’s examine three more important examples of what you can automate:

Read More »

How to improve customer experience strategy: 6 tips

As smart, connected products become more ubiquitous, we’re seeing the demise of the traditional value chain, in which there’s a clear beginning (when a product is designed) and an end (when it is sold to consumers). In our book, “Reinventing the Product,” Eric Schaeffer and I offer insight into how organizations must adapt to stay relevant and competitive in today’s fast-changing, connected new world.
In this new reality, traditional hardware-centric products are becoming “containers” for software and AI features. Companies, in turn, are shifting their focus to become more responsive, adaptive, and collaborative while creating and continually updating a compelling user experience. Here are a few examples:
[ Is 2020 the year of the pivot? Read: Now or never: Why CIOs must future-proof IT workforce strategy. ]
Caterpillar’s new generation of industrial equipment is being integrated with Cat Connect, a platform that allows the company to offer its customers telematics-driven services such as remote troubleshooting or performance optimization.
Tesla’s sophisticated software platform for its cars allows the continuous release of new features and functionality via remote updates. An example is the updates to enable self-driving technology, which relies heavily on advanced AI technologies.
Signify (formerly Philips Lighting) offers the Hue lighting platform, which lets users control their lighting systems via smartphone, transforming everyday lighting into a personalized experience. Users can play with colors, sync lights with music, TV, and games, and more. The platform also enables hundreds of third-party developers to create lighting applications.
While smart technology unlocks new value for today and future generations of products, it also puts new demands on CIOs and IT professionals because more products are constantly in flux. Hardware is no longer the differentiator – software-driven products are creating the value proposition, and dashboards are becoming highly customizable digital interfaces that can be updated remotely.
Five industry shifts
Here are five trends that are shaping the path forward for many businesses:
1. The fundamental business model is shifting from transactional product sales to a recurring, “as-a-service” model.
2. The traditional features that have historically served to differentiate devices have become less important to users, who are more focused on comprehensive experiences and outcomes.
3. Formerly insular products are transforming into connected platforms, complemented by other technology components or services.
4. The behavior and inner workings of products have shifted from mechanical functions to software and AI control.
5. Production of smart, connected products has changed from a linear value chain to looped iterations in agile and manufacturing processes.
Building a customer experience strategy: 6 tips
As a result, leading companies are creating customer experience roadmaps along with traditional product feature roadmaps to plot the evolution and improvement of the customer experience over time. Consider these six tips:
1. Participate in open source communities to spur new applications and software development. 
2. Transition products into platforms to increase the number of users and interactions throughout your ecosystem.
3. Keep a focus on great hardware engineering. While differentiation puts more ownership on software, successful hardware engineers will use new and adaptive high-tech materials and fabrics, learn newer skills using 3D printing, and have a deeper understanding and appreciation of data for both hardware and software.
4. Think through the customer’s point of view and endpoint, and deploy as-a-service models that give users more control over their outcomes and companies more predictable revenue streams.
5. Implement data privacy and security protection by partnering to gain the expertise you need.
6. Enable a remote workforce to manage constant change. Apple and other companies, for example, have seen great success in their remote collaboration and product development.
Are you ready for forever beta-mode?

Industries are in constant change, and the companies that survive and thrive will be the ones that can adapt their products and business models to meet new demands. While always-on connected functionality was once considered a nice-to-have feature, products are now in forever beta-mode. Consumers have high expectations, and organizations must respond to changing customer demands in real time.
[ Culture change is the hardest part of digital transformation. Get the digital transformation eBook: Teaching an elephant to dance. ]

Read More »

IT careers: How to get a job as a data engineer

Editor’s note: In this ongoing series for IT job hunters, we’ll explore in-demand roles, necessary skills, and how to stand out in an interview. Here, Jayaprakash Nair, head of analytics for Altimetrik, shares insights on getting a job as a data engineer.
Data engineer salary range:
$65,000 – $132,000 per year. Source: PayScale.
In a nutshell: What is a data engineer?
A data engineer is responsible for ingesting data from different sources into a central repository, such as a data lake or warehouse. They are also responsible for setting up automated pipelines so that data flows into the lake regularly, with as few impediments and issues, and as little data loss, as possible.
Data engineers are also responsible for cleaning and organizing the data (data quality and data transformations) to ensure that it becomes the single source of truth. Other roles include adding a layer of accelerators on top of the data – especially if it is big data – so it can more easily be used by downstream consumers, and, in certain cases, cataloging the data.
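As a toy illustration of that ingest-clean-load loop, the sketch below uses pandas. The source URL, column names, and lake path are all hypothetical, and a production pipeline would run under an orchestrator with real validation and monitoring:

```
# A toy sketch of the ingest-and-clean loop described above, using pandas.
# Source URL, column names, and the Parquet "lake" path are hypothetical.
import pandas as pd

# Ingest: pull a raw export from an upstream system.
raw = pd.read_csv("https://example.com/exports/orders.csv")

# Data quality: drop exact duplicates, enforce types, normalize keys.
clean = (
    raw.drop_duplicates()
       .assign(order_ts=lambda df: pd.to_datetime(df["order_ts"], errors="coerce"))
       .dropna(subset=["order_id", "order_ts"])
)
clean["customer_id"] = clean["customer_id"].str.strip().str.upper()

# Load: write partitioned Parquet into the lake as the single source of truth
# (writing to s3:// assumes pyarrow and s3fs are installed).
clean["order_date"] = clean["order_ts"].dt.date.astype(str)
clean.to_parquet("s3://demo-lake/orders/", partition_cols=["order_date"])
```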
This role is becoming increasingly critical, not only because of the exponential increase in data (including related data outside the company), but also because executives increasingly understand that significant, critical business insights can be mined from this data.
This role is not the same as a data scientist. While a data engineer ingests data from various sources and ensures that it is clean and secure, a data scientist is a consumer of this data. The main responsibility of a data scientist is to unearth valuable nuggets of information from this accumulated data, or perform more advanced predictive analytics.
[ Looking for a data scientist job? See: IT careers: How to get a job as a data scientist ] 
What skills are needed?
The skills expected of a data engineer have evolved over time. A few years ago, the main skills were SQL, OLTP, OLAP, data warehousing, and the like. Then came the era of big data and Hadoop, and the expected skills became HDFS, Hive, Pig, and other members of the Apache stack. Now that cloud providers are getting a good grip on the market, knowledge of their managed data services is taking center stage. These skills are desirable because they aid a data engineer’s ability to quickly collect relevant data.
How to stand out in a data engineer interview
Everyone can talk about their past job experience. To stand out, consider building a distinguished portfolio of solutions and code outside of your work, on GitHub, for example.

Also, consider participating in technical communities, as many serious headhunters scout these communities for prospects.
In an interview, data engineer job seekers should expect questions around intermediate and advanced SQL. Beyond that, they will likely be asked about the pitfalls to keep in mind when designing a modern-day data lake, how to ensure good data quality in the lake, AWS/Azure/GCP managed services for data and analytics, data governance, security, and building semantic data marts from the lake.
[ Read also: IT careers: How to job hunt during a pandemic. ]

Read More »
