

Tags: [clus]  [swug]  [cisco live]  [network insight]  [cisco live us]
Publication Date: Fri, 08 Jun 2018 15:31:27 GMT

CiscoLive! US ("CLUS") is right around the corner, set to open in sunny Orlando in just a couple of days. So it's time for me to run down the things I'm hoping to see and do while I'm hanging with 24,000+ of my closest friends and associates!

 

First, based on the recent enhancements to Network Insight in NPM and NCM, I've got a solid reason to dive deep into Nexus technology and see what treasures are there for me to find. As a monitoring engineer, I find that I often approach new technology "backward" that way--I'm interested in learning more about it once I have the capability to see inside. So now that the world of VDCs, vPCs, PACLs (port-based ACLs), VACLs (VLAN-based ACLs), and the like is open to me, I want to know more about it.

 

And that takes me to the second point. I'm really interested to see the reaction of attendees when we talk about some of the new aspects of our flagship products. The scalability improvements will definitely satisfy folks who have come to our booth year after year talking about their super-sized environments. If folks aren't impressed with the Orion Mapping feature, I think I'll check for a pulse. Orion Service Manager is one of those hidden gems that answers the question "who's monitoring my monitoring?" And by the end of the show, Kevin and I will either have the "Log" song fully harmonized, or our co-workers will have us locked in a closet with duct tape over our mouths. This, of course, is in honor of the new Log Monitor tool (Log Manager).

 

Something that has become more and more evident, especially with the rise of Cisco DevNet, is the "intersectionality" of monitoring professionals. Once upon a time, we'd go to CiscoLive and talk to folks who cared about monitoring and cared about networks (but didn't care so much about servers, applications, databases, storage, etc.). We'd go to other conventions, such as Microsoft Ignite, and talk to folks who cared about monitoring and cared about applications/servers (but didn't care as much about networks, etc.). Now, however, the overlap has grown. We talk about virtualization at SQL Saturdays. We discuss networking at Microsoft Ignite. And we talk about application tracing at CiscoLive. Or at least, we've started to. So one of the things I'm curious about is how this trend will continue.

 

Another theory I want to test is the pervasiveness of SDN. I'm seeing more of it "in the wild," and while I believe I understand what's contributing to that, I'm going to hold that card close to my chest for now, until CiscoLive 2018 is over. We'll see if my theory holds up.

 

Believe it or not, I'm excited to talk to as many of the 24,000 attendees as I can. As I wrote recently, meeting people and collecting stories is one of the real privileges of being a Head Geek, and I'm looking forward to finding so many people and stories in one place.

 

On the other side of the convention aisle, I'm also looking forward to hanging out with all my SolarWinds colleagues in an environment where we're not all running from meeting to meeting and trying to catch up during lunch or coffee breaks. Sure, we'll all be talking to folks (if past years are any indication, more or less non-stop). But in those quiet moments before the expo floor opens or when everyone has run off to attend classes, we'll all have a chance to re-sync the way that can only be done at conventions like this.

 

Speaking of catching up, there's going to be a SWUG again, and that means I'll get to meet up with SolarWinds users who are local to the area as well as those who traveled in for the convention. SWUGs have become a fertile ground for deep conversations about monitoring, both the challenges and the triumphs. I'm looking forward to hearing about both.

 

And then there's the plain goofy fun stuff. Things like Kilted Monday; folks risking tetanus as they dig through our buckets of buttons for ones they don't have yet (there are three new ones this year, to boot!); roving bands of #SocksOfCLUS enthusiasts; and more.

 

I'm just relieved that my kids are going to lay off the shenanigans this year. They caused quite a stir last year, and I could do without the distraction of mattress-surfing, blowtorch-wielding, chainsaw-swinging teenagers at home.

   

Tags: [monitoring]  [management]  [software defined storage]  [gestaltit]  [software defined networking]  [software defined]
Publication Date: Thu, 07 Jun 2018 07:04:19 GMT

When it comes to networking specifically, software-defined networking is a model in which programmability allows IT professionals to increase performance and monitor the network more accurately. The same can be seen in server environments as well. By harnessing the ability to program specific custom modules and applications, users can take the standard functions of their systems and drastically increase the range of what they are able to do. These abilities generally fall into three major areas: monitoring, management, and configuration.

 

Monitoring

 

Monitoring network and system performance and uptime is something admins and engineers alike are no strangers to. In the past, monitoring tools were limited to using ICMP to detect whether a system or device was still on the network and accessible. Software-defined IT expands the possibilities of your monitoring, whether with a standard, modern monitoring toolset or something you custom code yourself. Here's an example. Say you have a web application. It remains accessible via the network with no interruption. A database error pops up, causing the application to crash, but the device itself is still online and responding to ICMP traffic. By using a software-defined mentality, you can program tools to check for an HTTP response code or verify that a certain block of code loaded on the web application. That information is far more valuable than simply knowing whether a device responded to ICMP. Admins could even program their scripts to restart an application service if the application failed, potentially taking human intervention out of the loop.
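As an illustration of that idea (not code from the post), here is a minimal sketch of an HTTP-level health check with an automatic service restart; the URL and service name are hypothetical placeholders.

```python
# Minimal sketch of the software-defined check described above.
# APP_URL and SERVICE are hypothetical placeholders for your environment.
import subprocess
import requests

APP_URL = "http://webapp.example.local/health"
SERVICE = "webapp.service"

def app_is_healthy() -> bool:
    """Return True only if the app answers HTTP 200, not just ICMP."""
    try:
        return requests.get(APP_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False

if not app_is_healthy():
    # Restart the application service instead of waiting for a human.
    subprocess.run(["systemctl", "restart", SERVICE], check=True)
```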

 

Management

 

This is an example that I think is already here in a lot of networks today. Take the virtual machine environments that a lot of enterprises run. The software systems themselves can manage virtual servers without the need for human intervention. If a physical server becomes overloaded, virtual servers can be moved to another physical host seamlessly, based on preset variables such as CPU and memory usage. Using software-defined techniques allows the management of systems and devices to take place without an admin needing to a) recognize the issue and b) respond accordingly. In less time than it would take an admin to even notice an issue, the system can respond automatically with preconfigured response actions.
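To make the "preset variables" idea concrete, here is a rough, hypothetical sketch; Host, Vm, and migrate() stand in for whatever your virtualization platform actually exposes.

```python
# Rough sketch of threshold-based rebalancing driven by preset variables.
# Host, Vm, and migrate() are hypothetical stand-ins for a real platform API.
from dataclasses import dataclass, field

@dataclass
class Vm:
    name: str
    cpu_percent: float

@dataclass
class Host:
    name: str
    cpu_percent: float
    mem_percent: float
    vms: list = field(default_factory=list)

CPU_LIMIT, MEM_LIMIT = 85.0, 90.0  # preset variables

def migrate(vm: Vm, target: Host) -> None:
    print(f"moving {vm.name} to {target.name}")  # placeholder for a real live-migration call

def rebalance(hosts: list) -> None:
    for host in hosts:
        if host.cpu_percent > CPU_LIMIT or host.mem_percent > MEM_LIMIT:
            # Move the busiest VM to the least-loaded other host.
            target = min((h for h in hosts if h is not host), key=lambda h: h.cpu_percent)
            busiest = max(host.vms, key=lambda v: v.cpu_percent)
            migrate(busiest, target)
```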

 

Configuration

 

The last category where software-defined techniques can help admins and engineers is configuration. Here's another example. Your company is now using a new NTP server for all of your network devices. Normally, you would be responsible for logging in to every device and pointing it to the new server. With the modern software-defined networking tools that are available, admins can push the needed commands to every network device with very little manual interaction. Tasks like this, which could potentially take hours depending on the number of devices, can now be done in minutes.
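For illustration, here is a minimal sketch of that kind of bulk change using Netmiko (one common library, not named in the post); the device list, credentials, and NTP address are placeholders.

```python
# Minimal sketch: push a new NTP server to many devices at once.
# The device list, credentials, and NTP address are placeholders.
from netmiko import ConnectHandler

NEW_NTP = "10.10.10.10"
DEVICES = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]  # hundreds in real life

for ip in DEVICES:
    conn = ConnectHandler(device_type="cisco_ios", host=ip,
                          username="admin", password="secret")
    # Send the same one-line change to every device.
    conn.send_config_set([f"ntp server {NEW_NTP}"])
    conn.disconnect()
```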

 

The fact is, admins and engineers still retain ultimate control of everything. The available tools cannot run entirely on their own. During the initial configuration and programming of any required scripts, admins and engineers must know the ins and outs of their systems. That fact alone counters the common argument that software-defined IT will eliminate jobs, as anybody who has configured a software-defined toolset can attest. These tools are simply there to streamline and assist with everyday operations.

Tags: [logging]  [ambassador]  [user experience]  [microservices]  [distributed environment]  [gestaltit]  [component behaviour]  [user behaviour]
Publication Date: Thu, 07 Jun 2018 06:04:17 GMT

In a distributed tracing architecture, we need to define the microservices that work inside it. We also need to distinguish the “component” behavior from the “user” behavior and experience – similar words, but totally different concepts.

We should think of the multitude of microservices that constitute a whole infrastructure. Each of these microservices keeps a trace of what it's doing (behavior) and provides it to the next microservice, and so on, so that context doesn't get lost along the way.

Let's also look at user behavior: by dividing an application into microservices, each of them can adapt its behavior to the user's habits and improve their experience with the application.

 

Component and user behaviour

Imagine this as a self-learning platform: microservices learn from user habits and change their behavior accordingly.

Let's imagine an e-commerce website with its own engine. It will be composed of microservices for product displays, product suggestions, payment management, delivery options, and so on. Each of these microservices will learn from the users browsing the site. So, the microservice proposing suggestions will understand from user input that this user isn't interested in eBooks, but prefers traditional books. It passes this info to the next microservice, which will compose the showcase of traditional books. The user will choose to pay by PayPal rather than credit card, so next time the microservice will set PayPal as the default payment option and not display credit card options. Finally, after the user decides where and how to have the item delivered (mail, courier), the related microservices will be activated with the default address and the last-used delivery method.

 

Tracing methodology

To achieve this user experience, every microservice must get info from the previous one and send its own info to the next one: microservice behavior. This architecture has another benefit, too: every action is more agile, not monolithic, because the system automatically uses only the service required. The system does not need to parse and query all the options it could offer. Consider a SQL query: in the previous example, the database will be split into many tables instead of just a few, so the query runs only against the smaller table assigned to that particular microservice.

Distributed tracing is performed using traces, of course, but also spans. The trace follows the request received by a microservice from the "previous" module, in sequence, and actively passes it along to the next module. A span is a part of that trace and keeps detailed information about every single activity performed in a module.
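To make the trace/span relationship concrete, here is a hedged sketch using OpenTelemetry's Python API (my choice for illustration; the post doesn't name a library). The service name, attributes, and downstream URL are placeholders.

```python
# Illustration with OpenTelemetry (not named in the post): a suggestion
# service records its work as a span and propagates the trace context onward.
# In real use you would configure a TracerProvider and an exporter first.
import requests
from opentelemetry import trace
from opentelemetry.propagate import inject

tracer = trace.get_tracer("suggestion-service")

def suggest_books(user_id: str) -> None:
    with tracer.start_as_current_span("suggest-products") as span:
        span.set_attribute("user.id", user_id)
        span.set_attribute("user.prefers", "paper-books")  # learned behavior
        headers = {}
        inject(headers)  # put the current trace context into the HTTP headers
        # The next microservice continues the same trace from these headers.
        requests.get("http://showcase.example.local/compose",
                     headers=headers, timeout=5)
```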

 

User’s experience

Let's look at the user's experience. Segmenting an application into a multitude of services makes troubleshooting simpler. Developers can get a better understanding of which service is responsible for a slow response to the user, which lets them decide whether to upgrade and keep working on that service, or recode it. Anomalies and poor performance are quickly spotted and solved. Tracing matters even more in wide architectures, where microservices work in different sessions or, worse, on different hosts.

We can ultimately consider distributed tracing a "logging" method that offers a better user experience in less time, based on both component behavior and the user's choices.

Tags: [security]  [microsoft]  [apple]  [google]  [amazon]  [github]  [the actuator]
Publication Date: Wed, 06 Jun 2018 14:33:02 GMT

The big news this week is Microsoft agreeing to purchase GitHub for $7.5 billion USD. Microsoft continues to push into an area where it can be viable for the next 20 years or more. Amazon and Microsoft are slowly cornering the market in infrastructure hosting and development, leaving Google and Apple behind.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

What You Should Know About GitHub—And Why Microsoft Is Buying It

A quick summary to help make sense of how and why this deal happened. Seriously, it’s the right move for all parties involved. Microsoft now has the opportunity to position themselves as the number one maker of software development tools for decades to come.

 

Microsoft is now more valuable than Alphabet

In case you did not hear the news, Microsoft is now the third-largest company in terms of market capitalization, trailing only Apple and Amazon. Not bad for a company that can’t make a smartphone. Clearly they must be doing something right.

 

Coca-Cola Suffers Breach at the Hands of Former Employee

Yet another example of a former employee walking away with personal employee data. Corporations need to have strict policies and guidelines on removing access from employees the moment notice is given.

 

Mercedes Gone In 20 Seconds As Thieves Use Keyless Signal Cloning Tech

I’m surprised that this is possible with a late model car. Keyless entry has been around for a long time, and I could understand this vulnerability existing for an older model. But a late model Mercedes should be better than this.

 

An Alphabet spinoff company can cut a home’s energy bills by digging a deep hole in the backyard

I wish our country would spend more time and effort on making homes efficient. I’ve been looking into solar panels for my home, and now I’m looking into geothermal heating and cooling.

 

Woman found guilty of distracted driving despite claiming she was checking the time on her Apple Watch

And yet, she had to check multiple times. I would guess she was using the watch to do more than just check the time. Maybe next time she will check for a decent excuse.

 

Price’s Law: Why Only A Few People Generate Half Of The Results

Similar to Pareto’s Principle, Price’s Law talks about how only a few people are responsible for the majority of the output for the entire group. I am fascinated by this and will continue to look for examples (and counter-examples) in everything.

 

A cheat sheet for you to decipher GitHub comments:

Tags: [ambassador]  [agile]  [agility]  [operations monitoring]  [patch management]  [gestaltit]
Publication Date: Tue, 05 Jun 2018 14:16:02 GMT

“The price of reliability is the pursuit of the utmost simplicity.” C.A.R. Hoare, Turing Award lecture.

 

Software and computers in general are inherently dynamic, not in a state of stasis. The only way IT, servers, software, or anything else made of 1s and 0s can be perfectly stable is if it exists in a vacuum. If we think about older systems that were offline, we frequently had higher levels of stability--the trade-off was fewer updates, fewer new features, and longer development and feedback cycles, which meant you could wait years for a simple fix to a relatively basic problem. One of the goals of IT management should always be to keep these two forces--agility and stability--in check.

 

Agile’s Effect on Enterprise Software

 

Widespread adoption of Agile frameworks across development organizations has meant that even enterprise-focused organizations like Microsoft have shortened release cycles on major products to (in some cases) less than a year and, if you are using cloud services, as short as a month. If you work in an organization that does a lot of custom development, you may be used to daily or even hourly builds of application software. This creates a couple of challenges for traditional IT organizations: supporting new releases of enterprise software like Windows or SQL Server, and also supporting developers in their organization who are employing continuous integration/continuous deployment (CI/CD) methodologies.

 

How This Changes Operations

 

First, let's talk about supporting new releases of enterprise software like operating systems and relational database management systems (RDBMS). I was recently speaking at a conference where I was asked, "How are large enterprises with general patch management teams supposed to keep up with a monthly patch cycle for all products?" This was a hard answer to deliver, but since the rest of the world has changed, your processes need to change along with it. Just like you shifted from physical machines to virtual machines, you need to be able to adjust your operations processes to deal with more frequent patching cycles. It's not just about the new functionality you are missing out on. The array and depth of security threats means software is patched more frequently than ever, and if you aren't patching your systems, they are vulnerable to threats from both internal and external vectors.

 

How Operations Can Help Dev

 

While as an admin I still get nervous about pushing out patches on the first day, the important thing is to develop a process that applies updates in near real-time to dev/test environments, with automated error checking, and then relatively quickly moves the same patches into QA and production environments. If you lack development environments, you can patch your lower-priority systems first, before moving on to higher-priority systems.
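As an illustration of that flow (not a prescribed tool), here is a sketch of a ring-style rollout; apply_patch() and healthy() are hypothetical hooks for your patch tooling and automated error checking.

```python
# Sketch of a phased rollout: dev/test first, then QA, then production.
# apply_patch() and healthy() are hypothetical hooks into your own tooling.
import time

RINGS = ["dev", "test", "qa", "prod"]          # patch in this order
SOAK_SECONDS = 24 * 60 * 60                    # let each ring soak for a day

def apply_patch(ring: str) -> None:
    print(f"patching {ring}")                  # call your patch tool here

def healthy(ring: str) -> bool:
    return True                                # automated error checking goes here

for ring in RINGS:
    apply_patch(ring)
    time.sleep(SOAK_SECONDS)                   # wait before checking and advancing
    if not healthy(ring):
        raise SystemExit(f"errors detected in {ring}; halting rollout")
```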

 

Supporting internal applications is a bit of a different story. As your development teams move to more frequent build processes, you need to maintain infrastructure support for them. One angle for this can be to move to a container-based deployment model--the advantage there is that developers become responsible for shipping the libraries and other OS dependencies their new features need, since those ship with the application code. Whatever approach you take, you want to focus on automating your responses to errors generated by frequent deployments, and work with your development teams to do smaller releases that allow for easier isolation of errors.

 

Summary

 

The IT world (and the broader world in general) has shifted to a cycle of faster software releases and faster adoption of features. This all means IT operations has to move faster to support both vendor and internally developed applications, which can be a big shift for many legacy IT organizations. Automation, smart administration, and more frequent testing will be how you make this happen in your organization.

Tags: [government]  [it security]  [government_geekspeak]  [hybrid it]  [systems monitoring]  [patch management]
Publication Date: Tue, 05 Jun 2018 13:39:00 GMT

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

Hybrid IT presents SecOps challenges

 

The Department of Defense (DoD) has long been at the tip of the spear when it comes to successfully melding IT security and operations (SecOps). Over the past few decades, the DoD has shown consistent leadership through a commitment to bringing security awareness into just about every facet of its operations. The growing popularity of hybrid IT poses a challenge to the DoD’s well-honed approach to SecOps.

 

An increasing number of public sector agencies are moving at least some of their services and applications to the cloud while continuing to maintain critical portions of their infrastructures on-site. This migration is hampered by increased security concerns as agency teams grapple with items like the disconcerting concept of relinquishing control of their data to a third party, or documenting a system access list without knowing everyone behind the cloud provider’s infrastructure.

 

Here are five strategies teams can employ to help ensure balance and maintain the DoD’s reputation as a model for SecOps success.

 

Foster an agency-wide commitment to high security standards

 

The secure-by-design concept does not just apply to the creation of software; it must be a value shared by workers throughout the agency. Everyone, from the CIO down, should be trained on the agency’s specific security protocols and committed to upholding the agency’s high security standards.

 

Establish clear visibility into hybrid IT environments

 

Gaining clear visibility into applications and data as they move on- and off-premises is essential. Therefore, agencies should employ next-generation monitoring capabilities that allow SecOps teams to monitor applications wherever they may be. Tools can also be used to help ensure that they have established the appropriate network perimeters and to keep tabs on overall application performance for better quality of service. System and application monitors should be able to provide a complete environmental view to help identify recent and historic trends.

 

Rely on data to identify potential security holes

 

Identifying vulnerabilities requires complete data visualization across all networking components, whether they exist on-site or off. Teams should be able to select different sets of metrics of their choice, and easily view activity spikes or anomalies that correspond to those metrics. A graphical representation of the overlaid data can help pinpoint potential issues that deserve immediate attention.

 

Stay patched and create a software inventory whitelist

 

Software should be routinely updated to fortify it against the latest viruses and vulnerabilities. Ensure that you track the release of your patches, and make certain you have a documented and tested plan and rollout strategy. The ease of an automated patch management system can quickly become your biggest nightmare if you haven’t done proper validation.

 

SecOps teams should also collaborate on the creation of a software inventory whitelist. Teams should carefully research the software that is available to them and create a list of solutions that fit their criteria and agency security parameters. The NIST Guide to Application Whitelisting is a good starting point.

 

Hybrid IT is challenging the DoD to up its admirable SecOps game. The organization will need to make some strategic adjustments to overcome the challenges that hybrid IT poses, but doing so will undoubtedly yield beneficial results. Agencies will be able to reap the many benefits of hybrid IT while also improving their security postures. That is a win/win for both security and operations teams.

 

Find the full article on SIGNAL.

Tags: [automation]  [dba]  [business intelligence]  [analytics]  [serverless]  [data analytics]  [cloud database]
Publication Date: Thu, 31 May 2018 14:38:26 GMT

In the past year I have earned a couple of certificates from the Microsoft Professional Program. One certificate was in Data Science, the other in Big Data. I’m currently working on a third certificate, this one in Artificial Intelligence.

 

You might be wondering why a database guy would be spending so much time on data science, analytics, and AI. Well, I’ll tell you.

 

The future isn’t in databases, but in the data.

 

Let me explain why.

 

Databases Are Cheap and Plentiful

Take a look at the latest DB-Engines rankings. You will find there are 342 distinct databases listed; 138 of those are relational databases. And I'm not sure that's a complete list, either. But it should help make my point: you have no idea which one of 342 databases is the right one. It could be none of them. It could be all of them.

 

Sure, you can narrow the list of options by looking at categories. You may know you want a relational, or a key-value pair, or even a graph database. Each category will have multiple options, and it will be up to you to decide which one is the right one.

 

So, a decision is made to go with whatever is easiest. And “easiest” doesn’t always mean “best.” It just means you’ve made a decision that allows the project to move forward.

 

Here’s the fact I want you to understand: Data doesn’t care where or how it is stored. Neither do the people curating the data. Nobody ever stops and says “wait, I can’t use that, it’s stored in JSON.” If they want (or need) the data, they will take it, no matter what format it is stored in to start.

 

And the people curating the data don’t care about endless debates on MAXDOP and NUMA and page splits. They just want their processing to work.

 

And then there is this #hardtruth - It’s often easier to throw hardware at a problem than to talk to the DBA.

 

Technology Trends Over the Past Ten Years

Let’s break down a handful of technology trends over the past ten years. These trends are the technology drivers for the rise of data analytics during that time.

 

Business Intelligence software – The ability to analyze and report on data has become easier with each passing year. The Undisputed King of all business analytics, Excel, is still going strong. Tableau shows no signs of slowing down. PowerBI has burst onto the scene in just the past few years. Data analytics is embedded into just about everything. You can even run R and Python through SQL Server.

 

Real-time analytics – Software such as Hadoop, Spark, and Kafka allow for real-time analytic processing. This has allowed companies to gather quality insights into data at a faster rate than ever before. What used to take weeks or months can now be done in minutes.

 

Data-driven decisions – Companies can use real-time analytics and enhanced BI reporting to build a culture that is truly data-driven. We can move away from “hey, I think I’m right, and I found data to prove me right” to a world of “hey, the data says we should make a change, so let’s make the change and not worry about who was right or wrong.” In other words, we can remove the human factor from decision making, and let the data help guide our decisions instead.

 

Cloud computing – It’s easy to leverage cloud providers such as Microsoft Azure and Amazon Web Services to allocate hardware resources for our data analytic needs. Data warehousing can be achieved on a global scale, with low latency and massive computing power. What once cost millions of dollars to implement can be done for a few hundred dollars and some PowerShell scripts.

 

Technology Trends Over the Next Ten Years

Now, let’s break down a handful of current trends. These are the trends that will affect the data industry for the next ten years.

 

Predictive analytics – Artificial intelligence, machine learning, and deep learning are just starting to become mainstream. AWS is releasing DeepLens this year. Azure Machine Learning makes it easy to deploy predictive web services. Azure Workbench lets you build your own facial recognition program in just a few clicks. It’s never been easier to develop and deploy predictive analytic solutions.

 

DBA as a service – Every company that makes database software (Microsoft, AWS, Google, Oracle, etc.) is actively building automation for common DBA tasks. Performance tuning and monitoring, disaster recovery, high availability, low latency, auto-scaling based upon historical workloads, the list goes on. The current DBA role, where lonely people work in a basement rebuilding indexes, is ending, one page at a time.

 

Serverless functions – Serverless functions are also hip these days. Services such as IFTTT make it easy for a user to configure an automated response to whatever trigger they define. Azure Functions and AWS Lambda are where the hipster programmers hang out, building automated processes to help administrators do more with less.

 

More chatbots – We are starting to see a rise in the number of chatbots available. It won’t be long before you are having a conversation with a chatbot playing the role of a DBA. The only way you’ll know it is a chatbot and not a DBA is because it will be a pleasant conversation for a change. Chatbots are going to put a conversation on top of the automation of the systems underneath. As new people enter the workforce, interaction with chatbots will be seen as the norm.

 

Summary

There is a dearth of people that can analyze data today.

 

That’s the biggest growth opportunity I see for the next ten years. The industry needs people that can collect, curate, and analyze data.

 

We also need people that can build data visualizations. Something more than an unreadable pie chart. But that’s a rant for a different post.

 

We are always going to need an administrator to help keep the lights on. But as time goes on, we will need fewer and fewer of them. This is why I'm advocating a shift for data professionals to start learning more about data analytics.

 

Well, I’m not just advocating it, I’m doing it.

Tags: [hotfix]  [agile]  [features]  [bugs]  [patch management]  [gestaltit]  [upgrade]  [software testing]  [os upgrade]  [softwarerelease]
Publication Date: Thu, 31 May 2018 06:07:37 GMT

After you’ve installed your new storage systems and migrated your data onto them, life slows down a bit. Freshly installed systems shouldn’t throw any hardware errors in the first stages of their lifecycle, apart from a drive that doesn’t fully realize it’s DOA. Software should be up to date. Maybe you’ll spend a bit more time to fully integrate the systems into your documentation and peripheral systems. Or deal with some of the migration aftermath, where new volumes were sized too small. But otherwise, it should be “business as usual.”

 

That doesn't mean you can lie back and fall asleep. Storage vendors release new software versions periodically. The cadence used to be a couple of releases a year, apart from new platforms that might need a few extra patches to iron out the early difficulties. But with the Agile mindset of developers, and the constant drive to squash bugs and add new features, software is now often released monthly. So, should you upgrade or not?

 

If It Ain’t Broke…

One camp will go to great lengths to avoid upgrading storage system software. While the theory of "if it ain't broke, don't fix it!" has its merits up to a point, it usually comes from fear. Fear that a software upgrade will go wrong and break something. Let's be honest, though: over time, the gap between your (old) software version and the newer software only grows. If you don't feel comfortable with an upgrade path from 4.2.0 to 4.2.3, how does an upgrade path from 4.2.0 to 5.0.1 make you feel? Especially if your system shows an uptime of 800+ days?

 

On the other hand, there’s no need to rush either. Vendors perform some degree of QA testing on their software, but it's usually a safe move to wait 30-90 days before applying new software to your critical production systems. Try it on a less critical system first, or let the new installs in the field flush out some additional bugs that slipped through the net. Code releases have been revoked more than once, and you don’t want to be hitting any new bugs while patching old bugs.

 

Target and latest revisions

Any respectable storage vendor should at the very least have a release matrix that shows release dates, software versions, adoption rates, and the suggested target release. This information can help you balance “latest features and bugfixes” versus “a few more new bugs that hurt more than the previous fixes.”

 

Again, don’t be lazy and hide behind the target release matrix. Once a new release comes out, check the release notes to see if anything in it applies to your environment. Sometimes it does really make sense to upgrade immediately, like with critical security or stability patches. Often, the system will check for the latest software release and show some sort of alert. In the last couple of months, I’ve seen patches for premature SSD media wear, overheating power supplies that can set fire to your DC, and a boatload of critical security patches. If you keep up-to-date with code and release notes, it doesn’t even take that much time to scroll through the latest fixes and feature additions.

 

One step up, there are also vendors that look beyond a simple release matrix. They will look at your specific system and configuration and select the ideal release and hotfixes for your setup, all based on the data they collect from their systems at customers around the globe. And if you fall behind in upgrades and need intermediate updates, they will even select the ideal intermediate upgrades, blacklisting the ones that don't fit your environment.

 

How often do you upgrade your storage systems? And what’s your biggest challenge with these upgrades? Let me know in the comments below!

Tags: [disaster]  [amazon]  [alexa]  [the actuator]  [it professional]  [bitcoin]
Publication Date: Wed, 30 May 2018 14:36:22 GMT

Had a great time in Antwerp last week for Techorama, but it feels good to be home again. Summer weather is here and I'm looking forward to taking the Jeep out for a ride.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

The U.S. military is funding an effort to catch deepfakes and other AI trickery

As technology advances, it becomes easier for anyone to create deepfakes, and the U.S. military isn’t sure what they can do to solve the problem.

 

U.S. Launches Criminal Probe into Bitcoin Price Manipulation

Fake money is a scam, pure and simple. And as is often the case, our laws to protect people lag far behind the advances in technology.

 

Economic Inequality is the Norm, Not the Exception

Interesting analysis on how wealth gets distributed over time.

 

Here's how the Alexa spying scandal could become Amazon's worst nightmare

I've talked before about how Amazon and Microsoft deal in trust more than anything else. But this scenario isn't so much about trust as it is the equivalent of a butt-dial. Alexa performed a task as designed. We didn't stop using smartphones because of butt-dials, and we won't stop using Alexa either.

 

Your Professional Growth Questionnaire

Nice summary of questions to ask yourself as part of a self-review process. I especially like the mention of a 360. I wish more companies used similar methods of collecting data about employee performance.

 

The Places in the U.S. Where Disaster Strikes Again and Again

"About 90 percent of the total losses across the United States occurred in ZIP codes that contain less than 20 percent of the population." Wonderful data analysis and visualizations. It’s article like this that remind me how data is beautiful. Oh, and where not to live, too.

 

I was able to spend a few hours in Ghent last week, a beautiful city to pass the time:

Tags: [aws]  [ambassador]  [geek speak]  [gestaltit]  [data management]  [cloud technology]
Publication Date: Wed, 30 May 2018 13:33:57 GMT

Data management is one of the aspects of information technology that is regularly overlooked, but with the explosion of data in recent years, and that growth only accelerating, organizations need to get a handle on it. With the digital transformation that organizations are going through, being able to filter out worthless data is critical, given that many organizations are now unlocking competitive advantages from the data they gather and generate. In addition, adoption of public cloud services makes keeping track of data even more difficult, with data sprawled across on-prem and public cloud. This poses not only an operational concern but also a security concern, since you need to know what data lives where. Data management solutions that provide data analysis and classification make this much simpler by providing data about your data, so you can make informed decisions spanning everything from capacity planning for data backups to regulatory compliance with respect to data locality.

 

The benefits of managed data management are:

  1. Simplified Deployment - The data management solutions offered by cloud providers give you a quick and easy way to start getting insight from your data, delivered either as a SaaS solution or as a fully managed solution, which removes much of the heavy lifting that is common when deploying data management solutions yourself.
  2. Simplified Management - Managed offerings also absorb the administrative overhead that normally comes with data insight or big data solutions, such as patching and upgrading the software and looking after the number of servers associated with the deployment.

 

SaaS Deployment

The following solutions are Software as a Service (SaaS) deployments. This means the data management software company hosts the software for its customers.

 

Veritas Information Map

Veritas Information Map is a SaaS-based multi-cloud data management solution. Information Map provides insights into a company's data both on-prem and in public clouds such as AWS and Azure.

 

Komprise Intelligent Data Management

Komprise Intelligent Data Management is a SaaS-based data management solution. Komprise leverages existing industry-standard protocols for accessing data on a NAS or in an S3 bucket to provide insights into things such as when data was last accessed or who was the last person to access it. Komprise supports gathering data insights from NFS, CIFS, SMB, Azure, AWS, GCP, and more.

 

Fully Managed

The following are fully managed solutions: the cloud provider manages the data management platform on your behalf, enabling the IT organization to keep deriving valuable insight from its data without the management hassle.

 

AWS S3 Inventory

The AWS S3 Inventory solution is a simple inventory of the objects in an AWS S3 bucket along with associated metadata such as the storage class or encryption status.

 

AWS S3 Analytics

The AWS S3 Analytics solution provides storage class tiering recommendations, with data and insight into moving objects between storage classes to reduce cost by shifting infrequently accessed data to a cheaper storage tier.

 

Azure Storage Analytics

The Azure Storage Analytics solution provides data about various Azure storage solutions such as blob storage, queues, and tables. Storage analytics allows for the creation of charts and graphs to visualize things like data access patterns, including who accessed the data or even where the data was accessed from.

 

Data management can mean a lot of different things to different people given their specific focus for the data, but despite the different use cases, getting actionable insight from data is incredibly valuable and generally difficult. The goal of managed solutions is to help simplify and expedite the return on investment of data analysis.

Tags: [government]  [data center]  [network performance monitoring]  [government_geekspeak]  [network management]  [wireless sensor]
Publication Date: Tue, 29 May 2018 13:05:00 GMT

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

Here is an interesting article from my colleague, Joe Kim, in which he points out some of the challenges of managing wireless sensor networks.

 

For several years, government network administrators have tried to turn knowledge into action to keep their networks and data centers running optimally and efficiently. For instance, they have adopted automated network monitoring to better manage increasingly complex data centers.

 

Now, a new factor has entered this equation: wireless sensor networks. These networks are composed of spatially distributed, autonomous sensors that monitor physical or environmental conditions within data centers to detect conditions such as sound, temperature, or humidity levels.

 

However, wireless sensor networks can be extraordinarily complex, as they are capable of providing a very large amount of data. This can make it difficult for managers to get an accurate read on the type of information their connected devices are capturing, which in turn can throw into question the effectiveness of an agency’s network monitoring processes. 

 

Fortunately, there are several steps federal IT managers can take to help ease the burden of managing, maintaining, and improving the efficacy of their wireless sensor networks. By following these guidelines, administrators can take the knowledge they receive from their sensor arrays and make it work for their agencies.

 

Establish a baseline for more effective measurement and security

 

Before implementing wireless sensors, managers should first monitor their wireless networks to create a baseline of activity. Only with this data will teams be able to accurately determine whether or not their wireless sensor networks are delivering the desired results.

 

Establishing a baseline allows managers to more easily identify any changes in network activity after their sensors are deployed, which, in turn, provides a true picture of network functionality. Also, a baseline provides a reference point for potential security issues.
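To make "baseline, then compare" concrete, here is a small illustrative sketch (not from the article), assuming you have already exported per-interval throughput samples; the numbers are placeholders.

```python
# Small sketch: build a baseline from pre-deployment samples, then flag
# post-deployment intervals that deviate sharply. Values are placeholders.
from statistics import mean, stdev

baseline_mbps = [42.0, 40.5, 44.1, 39.8, 41.2]   # samples gathered before rollout
current_mbps  = [43.0, 58.9, 41.7]               # samples after sensors deployed

mu, sigma = mean(baseline_mbps), stdev(baseline_mbps)

for sample in current_mbps:
    if abs(sample - mu) > 3 * sigma:             # simple 3-sigma deviation check
        print(f"{sample} Mb/s deviates from the {mu:.1f} Mb/s baseline")
```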

 

Set trackable metrics to monitor performance and deliver ROI

 

Following the baseline assessment, administrators should configure trackable metrics to help them get the most out of their wireless sensor networks. For example, bandwidth monitoring that lets managers track usage over time can help them more effectively and efficiently allocate network resources. Watching monthly usage trends can also help teams better plan for future deployments and adjust budgets accordingly.

 

Metrics (along with the initial baseline) also can help agencies achieve measurable results. The goal is to know specifically what is needed from devices so that teams can get the most out of their wireless sensors. With metrics in hand, managers can understand whether or not their deployments are delivering the best return on investment.

 

Apply appropriate network monitoring tools to keep watch over sensor arrays

 

Network monitoring principles should be applied to wireless sensor networks to help ensure that they continue to operate effectively and securely. For instance, network performance and bandwidth monitoring software can be effective at identifying potential network anomalies and problematic usage patterns. These and other tools can also be used to forecast device scalability and threshold alerts, allowing managers to act on the information that sensors are sending out.

 

These tools, along with the other strategies mentioned above, are designed to do one thing: provide knowledge that can be turned into effective action. Managers can use these practices to bridge the gap between the raw data that their sensors are providing and the steps needed to keep their networks and applications running. And there is nothing scary about that.

 

Find the full article on Government Computer News.

Tags: [syslog]  [disaster]  [ambassador]  [snmp traps]  [disaster recovery]  [gestaltit]
Publication Date: Tue, 29 May 2018 06:03:39 GMT

Disasters come in many forms. I’ve walked in on my daughters when they were younger and doing craft things in their bedroom and said “this is a disaster!” When it comes to serious events though, most people think of natural disasters, like floods or earthquakes. But a disaster can also be defined as an event that has a serious impact on your infrastructure or business operations. It could be any of the following events:

  • Security-related (you may have suffered a major intrusion or breach)
  • Operator error (I’ve seen a DC go dark during generator testing because someone forgot to check the fuel levels)
  • Software faults (there are many horror stories of firmware updates taking out core platforms)

 

So how can SNMP help? SNMP traps, when captured in the right way, can be like a distress signal for your systems. If you’ve spent a bit of time setting up your infrastructure, you’ll hopefully be able to quickly recognise that something has gone wrong in your data centre and begin to assess whether you are indeed in the midst of a disaster. That’s right, you need to take a moment, look at the evidence in front of you, and then decide whether invoking your disaster recovery plan is the right thing to do.

 

Your infrastructure might be sending out a bunch of SNMP traps for a variety of reasons. This could be happening because someone in your operations team has deployed some new kit, or a configuration change is happening on some piece of key infrastructure. It’s important to be able to correlate the information in those SNMP traps with what’s been identified as planned maintenance.

 

Chances are,  if you’re seeing a lot of errors from devices (or perhaps lots of red lights, depending on your monitoring tools), your DC is having some dramas. Those last traps received by your monitoring system are also going to prove useful in identifying what systems were having issues and where you should start looking to troubleshoot. There are a number of different scenarios that play out when disaster strikes, but it’s fair to say that if everything in one DC is complaining that it can’t talk to anything in your other DC, then you have some kind of disaster on your hands.

 

What about syslog? I like syslog because it’s a great way to capture messages from a variety of networked devices and store them in a central location for further analysis. The great thing about this facility is that, when disaster strikes, you’ll (hopefully) have a record of what was happening in your DC when the event occurred. The problem, of course, is that if you only have one DC, and only have your syslog messages going to that DC, it might be tricky to get to that information if your DC becomes a hole in the ground. Like every other system you put into your DC, it’s worth evaluating how important it is and what it will cost you if the system is unavailable.
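For illustration only (the post doesn't prescribe a tool), here is a minimal sketch of shipping messages to a central syslog collector with Python's standard library; the collector address is a placeholder, and in practice you'd also keep a copy outside the DC for the hole-in-the-ground scenario.

```python
# Minimal sketch: send log messages to a central syslog collector over UDP.
# The collector hostname and port are placeholders for your own environment.
import logging
from logging.handlers import SysLogHandler

handler = SysLogHandler(address=("syslog.dc2.example.local", 514))
logger = logging.getLogger("dc1-app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.warning("storage array %s reports degraded RAID set", "array01")
```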

 

SNMP traps and syslog messages can be of tremendous use in determining whether a serious event has occurred in your DC, and in understanding what events (if any) led up to it. If you're on the fence about whether to invest time and resources in deploying SNMP infrastructure and configuring a syslog repository, I heartily recommend you look to leverage these tools in your DC. They'll likely come in extremely handy, and not just when disaster strikes.

Tags: [manage_applications]  [network monitor]  [gestaltit]  [service application monitor]
Publication Date: Thu, 24 May 2018 06:04:27 GMT

     When it comes to application performance management, the main focus is the application. That is the main concern for end-users. They do not care about network performance, server performance, or any other metrics that can be measured. They only care about the application they are trying to use. Development teams and server admins are usually the main parties involved in the monitoring and management process for applications, but the reality is that all of this runs over a network. To take performance management to the next level, it only makes sense to bring all parties to the table.

 

First Steps: The Internal Network

 

     When examining the role of the network in an application’s performance, the first step is the internal network. If your application is internal, this may be the only step to focus on for your application. Whether it is east/west in your server and user environments or north/south to your firewalls, network monitoring needs to be involved. Let’s see if this sounds familiar...

 

Application is having issues -> Dev team receives a trouble ticket -> They consult with the server team to find the root cause -> Once all of their options are exhausted, the network team is consulted as the next step.

 

     After this long process, when the network team is finally consulted, they find a small issue on the uplink switch. After a quick fix, everything is back to normal. That is where the issue lies for a lot of environments. A tiered approach adds unneeded steps to the troubleshooting process in application performance management. I like to think of it as almost a hub-and-spoke environment, with the application as the hub and each supporting team as a spoke.

 

Hub and Spoke Application Monitoring Structure

 

     By doing this, all parties are included in the process of application performance management. Each of them could be the first alerted party if there is an issue, and address the problem directly. This sets a good base for ensuring uptime and optimal performance for an application.

 

Monitoring External Network Performance

 

     Creating a strong structure for application performance management is the first step. Once this is mastered on the internal network, the network team has the additional task of monitoring external network performance. This is why it is crucial that the network team is included in monitoring applications. For example, there could be a routing issue between ISPs causing latency for external users accessing your application. Server analytics would show performance within acceptable tolerances, and the development team would not see any errors either, yet users could be having a poor experience with the application. If proper steps were taken to monitor the external network, this issue could easily be detected, resolved, and communicated to all affected users. One example of managing the external network is a toolset that lets you test from multiple endpoints all over the world, reporting stats ranging from ping latency to overall bandwidth from each of those locations.
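As a hedged illustration of that kind of external probe (not a specific product), here are a few lines that measure end-to-end HTTP latency from wherever the probe runs; the URL is a placeholder.

```python
# Tiny sketch of an external probe: measure end-to-end HTTP latency to the
# application from wherever this script runs. The URL is a placeholder.
import requests

APP_URL = "https://app.example.com/login"

try:
    response = requests.get(APP_URL, timeout=10)
    latency_ms = response.elapsed.total_seconds() * 1000
    print(f"{APP_URL}: HTTP {response.status_code} in {latency_ms:.0f} ms")
except requests.RequestException as exc:
    print(f"{APP_URL}: unreachable ({exc})")
```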

 

The fact is that utilizing the network team in application performance management is a no-brainer. Reducing troubleshooting and problem resolution times is something that any technical team can get behind. Next time you are planning a management and monitoring structure, be sure to focus on the network as well as the application itself.

Tags: [automation]  [google]  [internet of things]  [the actuator]
Publication Date: Wed, 23 May 2018 14:33:27 GMT

I’m in Antwerp this week for Techorama. It’s a wonderful event in a great location, the Kinepolis, a 24-screen theater that can hold about 9,000 people in total. If you are in or around Antwerp this week, stop by and say hello.

 

As always, here are some links from the Intertubz that I hope will hold your interest. Enjoy!

 

One-of-a-Kind Private Train Takes On Florida’s Traffic Nightmare

As much as I love autonomous vehicles, I know that having a modern rail system would be even better for our country. Here’s hoping Florida can get it done and lead the way for others to follow.

 

Are Low-skilled Jobs More Vulnerable to Automation?

For anyone that has ever been involved in a theoretical discussion regarding what jobs automation will replace next. Sometimes the jobs we think are easiest or best for machines are not. And some jobs (like automated index tuning for databases) are a lot easier for a robot than a human.

 

AI and Compute

Interesting analysis of the compute consumed by AI projects now compared to six years ago. Consumption appears to double every 3.5 months. This rise in consumption is something a legacy data center would never be able to keep pace with, and is an example of where the cloud shines as an infrastructure provider.

 

Reaching Peak Meeting Efficiency

Yes, meetings are a necessary and important part of corporate life. And no one likes them. It’s time we all underwent some ongoing training on how to make meetings an efficient use of our time.

 

'Sexiest Job' Ignites Talent Wars as Demand for Data Geeks Soars

At first these salaries seem silly. But there is a dearth of people that can analyze data properly in the world. And the value a company gains from such insights makes up for such high salaries. Face it folks, the future isn’t in databases, it’s in the data.

 

The Internet of Trash: IoT Has a Looming E-Waste Problem

Here’s the real garbage collection problem with technology today: billions of IoT devices with short lifespans. Might be wise to invest in companies that specialize in cleanup and recycling of these devices.

 

Uh, Did Google Fake Its Big A.I. Demo?

I think the word “faked” is meant for a clickbait headline here. Chances are Google did some editing to make it presentable. The bigger issue, of course, is just how human the interaction seemed. And that has people more upset than if it was faked entirely.

 

The Techorama Welcome Kit left in my room was a nice touch, and they almost spelled my name correctly:

Tags: [public_cloud]  [government]  [government_geekspeak]  [cloud monitoring tools]  [cloud migration]
Publication Date: Tue, 22 May 2018 13:24:00 GMT

By Paul Parker, SolarWinds Federal & National Government Chief Technologist

 

Despite more than £50m being ploughed into the digital transformation of the NHS, evidence suggests only a handful of trusts have adopted the Government’s “cloud first” policy.

 

NHS Digital spent more than £32m on digital transformation consultancy services, and £23m with cloud, software, and hardware providers between April and December 2017.

 

But its bosses must be questioning that spend: according to recent findings from IT management software provider SolarWinds, less than a third of the NHS trusts surveyed in January have adopted any level of public cloud, and mistrust remains high.

 

The research questioned more than 200 NHS trusts and revealed that, while respondents were aware of the government’s policy, less than a third have begun the transition.

 

Of those who have yet to adopt any level of public cloud, 64% cited security concerns, 57% blamed legacy tech, and 52% said budgets were the biggest barriers.

 

However, for respondents who had adopted some public cloud, budget registered as far more of a barrier (66%), followed by security and legacy technology (59% each).

 

Eight percent of NHS trusts not using public cloud admitted they were using 10 or more monitoring tools to try and control their environment, compared to just 5% of NHS trusts with public cloud.

 

In addition, monitoring and managing the public cloud remains an issue, even after adoption, with 49% of trusts with some public cloud struggling to determine suitable workloads for the environment.

 

Other issues included visibility of cloud performance (47%) and protecting and securing cloud data (45%).

 

Six percent of NHS trusts still expect to see no return on investment at all from public cloud adoption.

 

Speaking exclusively to BBH about the findings, SolarWinds’ chief technologist of federal and national government Paul Parker said, “Cloud is this wonderful, ephemeral term that few people know how to put into a solid thought process.

 

“From the survey results, what we have seen is that the whole “cloud first” initiative has no real momentum and no one particularly driving it along.

 

“While there are a lot of organizations trying to get off of legacy technology, and push to modernize architecture, they are missing out.”

 

Improvements to training are vital, he added.

 

“People tend to believe there’s this tremendous return on investment to moving into the cloud when, in reality, you are shifting the cost from capital to operating expenses. It is not necessarily a cost saving in terms of architecture and people need to recognize that. Rather than owning the infrastructure, with cloud you are leasing it, so it doesn’t change the level of investment much.”

 

There are savings to be had, though, and they come from converging job roles and improving access to medical records, for example.

 

Parker said, “With cloud, you do not need a monitoring team for every piece of architecture, so while there are savings to be had, they are more operational. With a cloud infrastructure, everything is ready at the click of a button.”

 

Moving forward, he says,

 

“It’s that old adage of ‘evolution, not revolution.’ There needs to be technology training, and the NHS needs to have an overarching goal, rather than simply moving to the cloud.

 

“The first thing trusts need to do is look at their current environment and determine what’s there, what’s critical and what’s non-critical. That will enable them to focus on moving the non-critical things into a cloud environment without jeopardizing anyone’s health or life or affecting security. That will help everyone to better understand the benefits of the cloud and to build trust.

 

“I would also like to see a top-down approach in terms of policy and direction. The ultimate goal of healthcare services is to make sure people have a better quality of life; and the goal of IT is to make sure they can deliver that. As IT experts, it is our job to try and help them to do that and make IT easier and more affordable.”

 

Find the full article on Building Better Healthcare.