Extra, extra, read all about it: VMware Tools OpenSSL vulnerabilities!
Update VMware Tools to resolve OpenSSL vulnerabilities CVE-2023-3446 and CVE-2023-2975. The ‘VMware Tools OpenSSL vulnerabilities’ showed up two (2) weeks ago, but it took about a week for the update to post. The latest Tenable scan article shows the OpenSSL update to v3.0.10 is required for VMware Tools.
Update VMware Tools
Start with the security scan output and the plugin ID you need to mitigate.
Tenable scan output of the OpenSSL plugin ID documenting the findings
Talk with your security team to identify the offending path and for guidance on which application might be the culprit. The diagnostic/debug details can be a lifesaver!
Snippet of Tenable OpenSSL path from scan diagnostic of OpenSSL vulnerabilities
Newer version of VMwareTools required to fix OpenSSL vulnerabilities.
Originally, no VMware Tools update was posted
VMware Tools v12.6 resolves CVE-2023-3446 and CVE-2023-2975. Hopefully your virtualization team uses an endpoint manager to manage server configurations and has an application/package wrapper to install VMware Tools, so this isn’t a manual process.
Either way, you’ll have to download the update from the download link
VMware Tools v12.6 has the OpenSSL update to resolve CVE-2023-3446 and CVE-2023-2975
The ‘MSSQL Addendum pack’ wouldn’t be possible without Brandon Pires’ contributions. Brandon dealt with my many questions on how to alert better! If you need more background, check the ‘why addendum pack’ post.
The pack builds on the SQL engineering blog and the program team’s multiple updates per year for SQL monitoring. First, the addendum creates two groups for dev/test and for notification/subscription modeling. Second, the overrides (and man, there are a bunch of them!) aid consumption of real issues. Lastly, most environments should be on SQL 2016+, as the 2012R2 EOL/EOSL is quickly approaching in October!
MSSQL groups defined in the Addendum pack
MSSQL group discoveries require updates to be applicable to environment
Tailor addendum
First, the MSSQL management packs MUST be installed for the Addendum pack to load. Only the version-agnostic MSSQL 2016+ packs are currently supported, as the 2012/2012R2 products are near end of support.
Find/Replace the variables as needed:
Example: ##TESTSERVER##|##DEVSERVER##
Save file
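If you’d rather script the edit than use Notepad++, the idea is a simple placeholder substitution. A minimal sketch (in Python for illustration; the server names are made up):

```python
# Replace the addendum pack's ##PLACEHOLDER## variables with real server names.
# The placeholder names come from the pack; the server values here are examples.
def fill_placeholders(xml_text: str, values: dict) -> str:
    for placeholder, server in values.items():
        xml_text = xml_text.replace(placeholder, server)
    return xml_text

pack = "<Expression>^(##TESTSERVER##|##DEVSERVER##)$</Expression>"
filled = fill_placeholders(
    pack, {"##TESTSERVER##": "SQLTEST01", "##DEVSERVER##": "SQLDEV01"}
)
print(filled)  # <Expression>^(SQLTEST01|SQLDEV01)$</Expression>
```

The same approach works for the ADFS and AD report placeholders later in this series: one pass per pack, then save and import.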
Overrides
The Addendum pack contains discovery, monitor, and rule overrides that tune MSSQL to CSA recommendations (from the Microsoft field engineers formerly known as PFE/CE/CSAe) and match the health model, reducing critical ‘wake me up in the middle of the night’ alerts.
Do you think of Star Trek when you hear the word ‘federation’ inside Federation Services (ADFS)?
To begin, the ‘ADFS addendum pack’ needs an acknowledgement of the contributors who dealt with my many questions on how to alert better on AD issues! My thanks to Jason Windisch for his help and expertise with Active Directory Federation Services (ADFS). If you need more background, check the ‘why addendum pack’ post. BTW, what do you associate with the word – federation?
The Active Directory Federation Services ‘ADFS Addendum pack’ first configures an ADFS group of related classes for notification/subscription modeling. Second, the rules, service monitors, tasks, service recoveries, alert cleanup, and summary reports aid consumption of real issues. Third, if you have ADFS 2012R2, I have an addendum pack, but coordination is necessary to get the ADFS management pack MSI (not currently available). Lastly, most environments should be on 2016+, as the EOL/EOSL is quickly approaching in October!
ADFS Addendum pack creates ADFS Group AND discovery requiring server names applicable to environment.
ADFS Group discovery requires server names applicable to environment
Tailoring the pack(s) to your environment
First, the Active Directory Federation Services management packs MUST be installed for the ‘ADFS Addendum pack’ to load. Only the version-agnostic 2016+ packs are currently supported, as the 2012/2012R2 products are near end of support.
Find/Replace the variables as needed
##ADFSSERVERNAME1##|##ADFSSERVERNAME2##|##LAB##
Save file
Workflows
First, the DataSources (DS) and WriteActions (WA) clean up alerts and create daily reports, where the WAs are the on-demand task versions of the DSs.
Data source (DS) scheduled workflows run weekdays between 0600-0700, SCOM management server local time. The summary and team reports (run during this window) summarize key insights. NOTE: the Monday report gathers the last 72 hours, so administrators get a ‘what happened over the weekend’ view; Tuesday-Friday reports cover the past 24 hours. Lastly, the group policy report summarizes unique GPUpdate error output.
Monitoring
ADFS Monitoring components screenshot from Notepad++
Addendum pack rules schedule data source execution and add on-demand tasks. The service monitors and recovery tasks add service recovery automation, bringing us to ‘manual intervention required’ alerting. There are a few monitor/rule overrides to match the health model.
Import
Download the updated ‘ADFS addendum pack’ and import it into your environment
Active Directory monitoring – definitely needs an addendum!
To begin, the ‘ADDS addendum pack’ needs an acknowledgement of the contributors who dealt with my many questions on how to alert better on AD issues! My thanks to Bob Williams, Vance Cozier, and Jason Windisch for their help and expertise with Active Directory (AD/ADDS). If you need more background, check the ‘why addendum pack’ post.
The Active Directory ‘ADDS Addendum pack(s)’ change how Tier0 health is surfaced and how Domain Admins consume alerts. The AD product team re-wrote the packs back in 2016 as PowerShell workflows; many workflows measure replication and the health of your forest(s), with less alert noise than the 2008 packs. Third, the addendums for 2012, 2012R2, and the version-agnostic 2016+ packs should help reduce the alert ‘burden’. Lastly, most environments should be on 2016+, as the EOL/EOSL is quickly approaching in October!
Workflows
First, the DataSources (DS) and WriteActions (WA) clean up AD pack alerts and create daily reports, team reports, and AD pack summary alerts, where the WAs are the on-demand task versions of the DSs.
DataSources (DS) and WriteActions (WA) clean up AD pack alerts, create daily reports, team, and AD pack summary alerts, and the WA are the on-demand tasks versions of the DS
Data source (DS) scheduled workflows run weekdays between 0600-0700, SCOM management server local time. The summary and team reports (run during this window) summarize key insights. NOTE: the Monday report gathers the last 72 hours, so administrators get a ‘what happened over the weekend’ view; Tuesday-Friday reports cover the past 24 hours. Lastly, the group policy report summarizes unique GPUpdate error output.
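The Monday 72-hour window is just a weekday check; a sketch of the logic (Python for illustration, not the pack’s actual PowerShell):

```python
from datetime import datetime

def report_lookback_hours(run_time: datetime) -> int:
    """Daily reports cover the past 24 hours, except Monday's report,
    which covers 72 hours to include the weekend."""
    # Monday is weekday() == 0 in Python
    return 72 if run_time.weekday() == 0 else 24

print(report_lookback_hours(datetime(2023, 8, 14)))  # a Monday -> 72
print(report_lookback_hours(datetime(2023, 8, 15)))  # a Tuesday -> 24
```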
Monitoring
ADDS monitoring snapshot showing rules, tasks, recoveries with added capabilities
Addendum pack rules schedule data source execution and add on-demand tasks, including new group policy rule alerts. The recovery tasks add service recovery automation, bringing us to ‘manual intervention required’ alerting. There are a few monitor/rule overrides to match the health model. NOTE: The 2012R2 pack is missing the component alert, as there are less than 2 months until platform support ends.
The component alert is a new workflow that’s helped Tier0 admins.
Basically, this is a PowerShell workflow that checks SCOM alerts across multiple DCs to determine DC health. I don’t change the AD critical service monitors; I simply summarize the alerts to tell you when intervention is required.
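Conceptually, the summarization is ‘count open critical alerts per DC and flag when a threshold is crossed’; a Python sketch with an assumed threshold of two (the real workflow is PowerShell querying SCOM alerts):

```python
from collections import Counter

def dcs_needing_intervention(open_alerts, threshold=2):
    """open_alerts: list of (dc_name, alert_name) tuples for critical AD alerts.
    Summarize counts per DC; flag DCs at or above the threshold."""
    counts = Counter(dc for dc, _ in open_alerts)
    return sorted(dc for dc, n in counts.items() if n >= threshold)

alerts = [("DC01", "NTDS service"), ("DC01", "DFSR backlog"), ("DC02", "Netlogon")]
print(dcs_needing_intervention(alerts))  # ['DC01']
```

A single summary alert per unhealthy DC is what keeps the Tier0 admins from drowning in the underlying per-monitor noise.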
Tailoring the pack(s) to your environment
First, the Active Directory Domain Services management packs MUST be installed for the ‘ADDS Addendum pack(s)’ to load. The three currently supported versions have addendums; hopefully 2012/2012R2 are planned to be decommissioned in the short term.
Update the AD summary and team reports
The AD summary and team reports use group regular expressions for the specific Tier0 servers owned by Domain Administrators, the AD team (or whatever other aliases the SMEs may go by).
In your favorite XML editor (mine is Notepad++), open the addendum pack(s), and find/replace for the following strings:
If only all certificates were gift certificates! The ‘ADCS Addendum packs’ disable noisy rules and add OCSP seed, OCSP responder, and OCSP group classes. Service monitoring, recoveries, and the nCipher event are the main highlights, reducing alerts for ADCS 2012/2012R2/2016+. My thanks to Bob Williams, CSA, for the assist!
The ADCS Addendum packs discover OCSP (seed class), and OCSP responder registry keys installed on monitored servers.
OCSP seed class
Group discovery tailors OCSP classes, for subscription or alert tuning.
OCSP server group can be used for subscription, or alert tuning (depending on class targets)
Monitors and service recoveries keep OCSP services monitored, and only alert when manual intervention is required.
OCSP service, certsvc monitors and service recovery automations built in
Tailoring the pack(s) to your environment
First, you must have at least ONE (1) set of Active Directory Certificate Services (ADCS) management packs installed so the ‘ADCS Addendum pack’ will load. The three currently supported versions have addendums; hopefully 2012/2012R2 are planned to be decommissioned in the short term.
Second, if you don’t have OCSP in your environment, download the pack and then import it into your environment –
ELSE
Update the ‘OCSP Responder’ server name(s) for the group regular expressions.
In your favorite XML editor (mine is Notepad++), open the addendum pack(s), and find/replace for the following strings:
An IT Ninja is required for improving monitoring, hence ‘Why addendum packs’
‘Why addendum packs’? What value can they bring to my customer? Kevin Holman started the addendum thought process quite a while back: added functionality for a core application/program/product. The first example of this pack naming convention is his SQL RunAs Addendum to simplify SQL monitoring. Let’s break down a number of examples of how the SCOM community has built packs for better monitoring, and how I believe the addendum packs bring IT Ninja lessons from Microsoft monitoring experts to your environment.
Why Addendum packs
Better monitoring from the experts, including customer examples for other ‘blind spots’ in monitoring. Blind spots are ‘not monitored’ pieces of infrastructure; coverage can be as simple as an event, ping, service, TCP port check, process, web site, or scripted workflow, with the purpose of identifying a problem.
The goal of monitoring is to:
Identify issues, self-heal, automatically run recovery or diagnostic workflows, and alert when manual intervention is required. It doesn’t matter what tool you use; they all do some portion of these steps.
The addendum packs do these things, adding a few differentiators.
Auto closure daily scripts (close rules/monitors)
Auto reports of problems (M-F 0600-0700 local, reflecting last 24-72 hours of open/closed alerts)
Employ count logic (x in y time)
Self-heal monitors with no new events
Adjust alert severities to health model
where critical (red) = outage, warning (yellow) = issue, informational reports or FYI’s
Capable of updating alerts (status, owner, ticketID+)
Tasks to run workflows on-demand
Recovery tasks (e.g. service restart automation, TopProcess, logical disk cleanup, MECM client cache clean)
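The ‘count logic (x in y time)’ item above can be sketched as a sliding window over event timestamps (Python for illustration; in the packs this is repeat-count/consecutive-samples configuration):

```python
from datetime import datetime, timedelta

def threshold_breached(event_times, x, y_minutes):
    """True if any window of y_minutes contains at least x events."""
    times = sorted(event_times)
    window = timedelta(minutes=y_minutes)
    for i in range(len(times)):
        # count events from times[i] forward that fall inside the window
        in_window = [t for t in times[i:] if t - times[i] <= window]
        if len(in_window) >= x:
            return True
    return False

base = datetime(2023, 8, 1, 6, 0)
events = [base, base + timedelta(minutes=2), base + timedelta(minutes=4)]
print(threshold_breached(events, x=3, y_minutes=5))  # True
print(threshold_breached(events, x=3, y_minutes=3))  # False
```

Count logic is what turns one transient event into silence and a burst of events into a single actionable alert.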
Data from Star Trek: The Next Generation – ‘Mr. Tricorder’ makes me laugh!
‘AD Application monitoring’ > web synthetics > artificial users > android: what image comes to mind? Is it a person, or a thing from a sci-fi movie? Perhaps Bishop from Aliens, or Data from Star Trek. What does ‘AD Application monitoring’ consist of? Currently, that means a CRL validity check and an ADFS web synthetic (proving that ADFS is responding). My thanks to Jason Windisch, CSA, for the supplied PowerShell!
The purpose of the pack is to add scheduled workflows that act like the user and identify whether the CRLs are about to expire. Most times, monitoring stops at an ICMP ping, and most times there’s still an outage even though the network and servers are responding. The next layer is IIS, Apache, etc. Sometimes the network team gets involved, checking that a base IIS URL is configured. Most outages aren’t the network, and IIS was running. That’s why we focus on whether the web application itself is responding. Does the multi-pronged tactical approach make sense?
This pack delivers on-demand tasks, daily reports, and rules/monitors to reflect health. Customize the watcher node and some URLs, save, and import into SCOM!
Assign watcher node(s)
Assign a watcher node by creating a registry key.
What does that mean? Watcher nodes are needed to provide the user’s perspective.
Multiple site example
Issue: Users from sites 1, 2, and 3 are having problems accessing web pages. To understand a user in site 2, leverage a server in site 2 to initiate the web request (Invoke-WebRequest in PowerShell).
Why: Differentiate user experience (per site). Answer the ‘did you know’ – is the application responding from this site/perspective.
Unfortunately, the watcher node concept eludes most administrators. Mastering ‘user perspective’ is an invaluable aid in moving from reactive ‘firefighting’ to being told proactively, before users call. Hopefully this explains the power of monitoring that imitates user interactions for key web applications.
How: Create the registry key on whatever servers you want to initiate the web monitor
From PowerShell (as Admin), or Command Prompt (as admin)
Example of XML snippet from AD Applications management pack
AD Applications Watcher Node – create specific registry key
Set up CRL Validity check and ADFS synthetic
Next, configure the URL’s for the customer environment for the ‘AD Application monitoring’ management pack.
Update AD Applications module types for monitor/rules for CRL and ADFS synthetics
Configure the CRL validity check array
From your favorite XML editor (notepad++ pictured)
Find/Replace ##FQDN##, ##CRLstring##, numbers to customer environment
CRL Validity check, create your array length as needed for customer environment
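Under the hood, the validity check is just date math against each CRL’s NextUpdate; a hedged sketch (Python; the URLs and warning threshold are examples, and the pack fetches and parses real CRLs via PowerShell):

```python
from datetime import datetime, timedelta

def crls_about_to_expire(crl_next_updates, now, warn_days=7):
    """crl_next_updates: {crl_url: next_update_datetime}.
    Return URLs whose NextUpdate falls within warn_days of now."""
    cutoff = now + timedelta(days=warn_days)
    return sorted(url for url, nxt in crl_next_updates.items() if nxt <= cutoff)

now = datetime(2023, 8, 1)
crls = {
    "http://pki.contoso.com/root.crl": now + timedelta(days=3),    # expiring soon
    "http://pki.contoso.com/issuing.crl": now + timedelta(days=30),
}
print(crls_about_to_expire(crls, now))  # ['http://pki.contoso.com/root.crl']
```

Alerting a week before NextUpdate gives the PKI team a window to republish before clients start failing revocation checks.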
Configure the ADFS synthetic request(s)
From your favorite XML editor (notepad++ pictured)
Find/Replace $server and ##FederationFQDN##; if necessary, update the ADFS URL string (the /adfs/ls/idpinitiatedsignon.aspx portion) to match the customer environment
Update the ADFS URL for Invoke-WebRequest; the ADFS default URL is in the specified example
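At its core, the synthetic requests the IdP-initiated sign-on page and checks for an HTTP 200. A sketch (Python urllib for illustration; the pack uses Invoke-WebRequest, and ##FederationFQDN## is the pack’s find/replace variable):

```python
from urllib.request import urlopen
from urllib.error import URLError

def build_adfs_url(federation_fqdn: str) -> str:
    # Default ADFS IdP-initiated sign-on page
    return f"https://{federation_fqdn}/adfs/ls/idpinitiatedsignon.aspx"

def adfs_responding(federation_fqdn: str, timeout: int = 10) -> bool:
    """Perform the synthetic request from the watcher node."""
    try:
        with urlopen(build_adfs_url(federation_fqdn), timeout=timeout) as resp:
            return resp.status == 200
    except URLError:
        return False

print(build_adfs_url("##FederationFQDN##"))
```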
Proactive Analyst Reports as a new way to ingest key insights from SCOM
As a SME or team lead, ever need to know a key insight for the enclave? Let’s talk about the ‘Proactive Daily Reports’ pack, which provides some built-in reports on what transpired in an enclave. Building again on the Health pillar, we can simplify what owners need to see. Creating a PowerShell script was a simpler alternative to a complex SSRS report that often broke due to patching and not following best practices. The pack shows a simpler way to bring key insights to owners: Pending Reboots, expiring PKI certificates, Logical Disk alerts, a System Admin summary, and SCOM admin reports including long-running scripts, script errors, SCOM errors, and an alert updates report.
Let’s start with some example reports: expiring certificates, Logical Disk, Pending Reboot, the System Admin summary, and the SCOM admin reports.
Expiring Certs –
About to expire certificates
Expiring PKI certificates reports
Logical disk alerts –
Shows Server, drive, and % full data
Logical disk alerts report, showing zero for the past 72 hours (over a weekend)
Alerts of servers pending restart, not patched, not rebooted
Pending reboot report lists servers pending restart, not patched, not rebooted alerts
System Admin summary
This is really a consolidation of multiple insights:
Server performance issues
Open ITSM/Remedy tickets
Unhealthy Agents
Pending Reboot, Not Rebooted, Not patched
Disabled/Unhealthy/MaintenanceMode, Repeatedly down Agents
Logical Disk free space alerts
Expiring certificates
AD DC (ADDS) critical alerts
DNS alerts
Group Policy issues
SysAdmin daily summary report example alert
SCOM admin reports
Admin reports include a few separate alert reports: long-running scripts, script errors, SCOM errors, and an alert updates report.
SCOM Admin alerts report example of common SCOM problems
Long running scripts
SCOM Admin long running scripts alerts report example of long-running report workflows to help tune run-time
ScriptErrors showing key SCOM connectivity issues
SCOM Admin script errors to help diagnose report script syntax errors
Useful links
Other blog posts for addendum management packs and capabilities –
As a SME or team lead, ever need ‘Proactive Patching alerts’? I.e., what servers need patches applied, aren’t patching, or were missed? This pack builds on three (3) pillars – Health/Security/Compliance – enabling Cyber teams and more. It became an alternative to a complex pack with an SSRS report that a customer used to identify systems; the report was long and had many blank lines/pages, which required a re-write. This pack started with the pending restart monitor, taken directly from the AquilaWeb reboot pack logic. The logic helps SysAdmin/Domain Admin/NOC/NOSC/SOC teams know when servers need reboots. The need is driven further by the multiple reboots (sometimes) required by Windows monthly updates and application updates. Used across multiple customers, this is the first pack enabling a proactive stance to answer the ‘Am I compliant?’ question.
David Allen built the ‘Aquilaweb.Support.PendingReboot.Monitor.PendingReboot’ PowerShell monitor to tell system owners when the pending restart flag was present. Some builds, though, make system changes that repeatedly flip the registry key, causing many alerts. Also, downloading the Aquila pack is a trick, as TechNet was retired.
David provided a great idea, which was built upon. This gave rise to the question: what if the server was not patched, or not rebooted, in a period of time? With my Cyber hat on, this became the next piece of content to create. That raised another question – do these scenarios need to be reflected in health (a monitor), or not (a rule)? We’re all about choices and free will, so the pack is built with both options (rules disabled out of the box).
Pending restart monitor XML showing options
The pack is set up to alert on CBS application updates, SCCM/MECM/Configuration Manager endpoint management updates, and Windows Updates. In my experience, these give the most accurate alerts on secure builds where the application/system owner needs to take action.
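A minimal sketch of the idea: test a set of well-known ‘reboot pending’ registry locations (CBS, Windows Update, ConfigMgr). The paths below are the commonly documented ones, not necessarily the pack’s exact list, and the reader function is injected so the logic runs anywhere:

```python
# Commonly documented 'reboot pending' registry locations (assumption: the
# pack checks equivalents of these via PowerShell on the monitored server).
PENDING_REBOOT_KEYS = [
    r"HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\RebootPending",
    r"HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired",
    r"HKLM\SOFTWARE\Microsoft\SMS\Mobile Client\Reboot Management\RebootData",
]

def reboot_pending(key_exists) -> bool:
    """key_exists: callable taking a registry path and returning bool."""
    return any(key_exists(k) for k in PENDING_REBOOT_KEYS)

# Simulated server state: only the Windows Update flag is present.
present = {PENDING_REBOOT_KEYS[1]}
print(reboot_pending(lambda k: k in present))  # True
```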
The Last Patch and Last Reboot monitors/rules in the download are set to 45 days. Tune this value down if patching occurs at the 30-day mark; increase it if you need more time before alerts.
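The threshold check itself is simple date math; a sketch (Python for illustration; the monitor reads the real last-patch/last-boot times on the server):

```python
from datetime import datetime, timedelta

def is_stale(last_event: datetime, now: datetime, threshold_days: int = 45) -> bool:
    """True when the last patch (or reboot) is older than the threshold."""
    return now - last_event > timedelta(days=threshold_days)

now = datetime(2023, 8, 15)
print(is_stale(now - timedelta(days=60), now))                     # True
print(is_stale(now - timedelta(days=20), now))                     # False
print(is_stale(now - timedelta(days=40), now, threshold_days=30))  # True
```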
Last Patch Monitor reflecting number of days
Otherwise, download and import into your environment. The Proactive set of alerts is built on the Windows Operating System class; depending on your subscription/notification settings, if subscriptions include that class, notifications to system/application owners are automatic.
Task Manager output for ‘Top Process PowerShell script management pack’
Ever wish you had Task Manager output when a monitor went unhealthy? Following Kevin Holman’s lead to ‘Monitor Processes’, the idea landed to build out the ‘Top Process PowerShell script’. This morphed into a management pack with knowledge entries to better explain what is being done. Integrating Top Process into Health Explorer output as a recovery task provided another step before alerting. The idea started from the need to prove which security tool(s) were causing the over-utilized compute spikes that made server(s) non-responsive. Thinking back to my UNIX days, we simply used top, vmstat, iostat, and other commands to identify problematic processes. Integrating PowerShell scripts into SCOM is part of the fun, then linking the obfuscated security processes for the final output. From there, extrapolate into Azure Functions or Azure Logic Apps for additional cloud-native monitoring functionality.
Kevin Holman built the ‘Monitor.Performance.ConsecSamples.ThenScript.TwoState.mpx’ fragment, beginning the logical journey. His fragment gave me a working model to start from, taking processes and cores into consideration for true CPU usage on multi-core servers.
Kevin Holman Monitor performance then script fragment for PowerShell get-counter syntax
We need to see the processes and their corresponding values, then build an output table (custom object). After gathering the processes, feed the TopProcesses array, and lastly sort the array by CPUValue.
Top Process memory usage snippet
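The gather-then-sort step looks roughly like this: build a custom object per process, normalize raw CPU by core count for a true multi-core percentage, then sort by CPUValue (Python with sample data for illustration; the pack does this with Get-Counter in PowerShell):

```python
def top_processes(samples, cores, top_n=3):
    """samples: list of (name, raw_cpu_percent) where raw values can exceed
    100 on multi-core boxes; normalize by core count, then sort descending."""
    table = [
        {"Name": name, "CPUValue": round(raw / cores, 1)}
        for name, raw in samples
    ]
    return sorted(table, key=lambda p: p["CPUValue"], reverse=True)[:top_n]

samples = [("sqlservr", 340.0), ("MsMpEng", 120.0), ("w3wp", 60.0), ("idle", 10.0)]
for proc in top_processes(samples, cores=4):
    print(proc["Name"], proc["CPUValue"])
# sqlservr 85.0 / MsMpEng 30.0 / w3wp 15.0
```

Dividing by the core count is the detail Kevin’s fragment highlights: a raw 340% counter sample on a 4-core box is really 85% of total compute.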
Next, we’ll want to see what applications/tools might be involved, including Active Client, IIS, monitoring, and EndPoint Management tools (keep things honest!).
Added the Security Processes into the mix
Then we build an output of the data so we can take the DataSource (DS) or WriteAction (WA) into a scripted monitor/rule, or into recovery tasks linked to various monitors. I even built a forked version for SAW/Red Forest scenarios, separating Tier0 monitoring from Tier1 (the snippet below is NOT that pack).
snippet of manual tasks and recoveries that link to multiple monitors