Sep 12, 2016 - 09:49 AM
Well, in defense of those 47%, the other 53% might not have answered truthfully ;)
In fairness to all, the word 'accuracy' tends to be a bit misused in the context of an inventory count. An inventory is really a 'census', in that you intend to record all members of the population, and thus 'accuracy' is really a participation rate.
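As a toy illustration of that distinction (all of the numbers here are made up):

```python
# Inventory 'accuracy' as a census participation rate: the share of the
# population you believe exists that actually showed up in the count.
# Hypothetical figures for illustration only.
expected_assets = 5_000    # the population you believe exists
found_assets = 4_750       # assets the inventory actually recorded

participation_rate = found_assets / expected_assets
print(f"Participation rate: {participation_rate:.2%}")  # 95.00%

# The catch: 'expected_assets' is itself an estimate, so this rate is only
# as good as your belief about the size of the population.
```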
It may be that some of the respondents were answering under the assumption that the assets 'found' in an inventory may not represent all the assets that exist (remote servers, desktops in the closet, laptops in the MD's dining room, etc.). You don't know what you don't know.
As well, the rate/accuracy of an inventory may not need to be 99.98%, depending on the *reason* for the inventory in the first place. A hospital doing a SAM audit will typically NOT audit the laptops used by healthcare workers or by operational medical equipment. That said, they don't really have to (those devices are locked down and imaged), but the other non-patient-facing devices may have to participate.
A security inventory may have a higher requirement for accuracy/participation rate (and even stricter data-element requirements, like 'location') than, say, a SAM review. Thus, the reason for the review (security, ITSM, SAM, etc.) will affect what participation rate is acceptable.
The business 'topology' will also affect the ability to achieve high 'accuracy', as some assets are non-networked (slave servers attached to machinery), sit in network pockets (branch offices), or live in a DMZ (core-business servers).
So, the 47% who said 'no/unsure' could have said so due to a constraint in accessing the assets, rather than a lack of methodology.
Lastly, 'accuracy' may carry a different weight depending on the AssetType. In your example, you achieved 99.98% accuracy across your 16,000 devices, which leaves roughly three devices unaccounted for. If those missing assets are desktops, that's an oversight. If they are servers, sound the alarm!
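To make that concrete, here is a minimal sketch of weighting misses by AssetType; the severity weights are invented for illustration:

```python
# Hypothetical severity weights per asset type: a missing server should
# hurt the score far more than a missing desktop.
SEVERITY = {"desktop": 1, "laptop": 2, "server": 50}

def weighted_miss_score(missing):
    """missing: dict mapping asset_type -> count of unaccounted-for assets."""
    return sum(SEVERITY.get(kind, 1) * count for kind, count in missing.items())

print(weighted_miss_score({"desktop": 3}))  # 3   -> an oversight
print(weighted_miss_score({"server": 3}))   # 150 -> sound the alarm
```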
So, this manifesto doesn't answer your question... but I'm hoping to put into perspective that those 47% weren't necessarily doing a substandard job. It was a very simple question about a complex situation.
Sep 12, 2016 - 02:35 PM
Going by question (1), it seems you are interested only in the client environment of laptops/desktops/workstations (hopefully minus BYOD), with no vendors on-prem or on the network. I don't think it is feasible to claim 99.98% accuracy in a client environment, simply because those devices cannot be discovered at all times, especially once you throw in laptops and thin clients (as opposed to workstations/desktops). Anyway, more details would be needed to break this down and evaluate it. It also makes me wonder whether your organization is spending money only on a wall-to-wall physical reconciliation at the end of the year. Maybe I am wrong. I am interested to know more about your spread of devices and what others might add.
Sep 13, 2016 - 12:45 AM
Logically segment your hardware into different groups. Call these ProductCategories or ProdCats.
Each ProdCat needs to be handled slightly differently.
For servers, you need to do a very rigorous physical audit, as well as tie the process back to ensuring they are reporting (if possible).
For desktops, I would suggest doing a 50-computer sample every month: pick an area, do a manual audit, and compare it to your system of record (probably a CMDB). You will find missing or mis-identified assets. Correct them, but keep a tally of the number of incorrect items and use that as a metric to help drive consistency. (PS: make sure you audit a random sample... auditing the same 50 machines isn't helpful.)
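A minimal sketch of that monthly sample, assuming you can export the CMDB desktop records and record what the walk-around actually finds (all names and numbers here are invented):

```python
import random

def pick_audit_sample(cmdb_desktops, n=50, seed=None):
    """Pick a fresh random sample of desktops to physically audit.
    Using a different seed (or none) each month avoids re-auditing
    the same 50 machines."""
    rng = random.Random(seed)
    return rng.sample(cmdb_desktops, min(n, len(cmdb_desktops)))

def score_audit(sample, physically_found):
    """Compare the sampled CMDB records with what the audit found;
    return a tally you can track month over month."""
    missing = [tag for tag in sample if tag not in physically_found]
    return {"sampled": len(sample),
            "incorrect": len(missing),
            "error_rate": len(missing) / len(sample)}

# Hypothetical usage: asset tags from the CMDB vs. tags seen on the floor.
cmdb = [f"DT-{i:05d}" for i in range(1, 1200)]
sample = pick_audit_sample(cmdb)
found = set(sample[:47])           # pretend 3 of the 50 were missing/mislabeled
print(score_audit(sample, found))  # {'sampled': 50, 'incorrect': 3, 'error_rate': 0.06}
```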
For mobile, just don't let anything that isn't on your MDM connect to the wireless in your building. That will get people to request access and will help them provide you with accurate data.
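The enforcement itself happens in your NAC/wireless gear, but the reconciliation behind it can be as simple as a set difference; the MAC addresses below are placeholders:

```python
# Hypothetical data: MACs enrolled in the MDM vs. MACs seen on corporate Wi-Fi.
mdm_enrolled = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}
wifi_seen = {"aa:bb:cc:00:00:01", "de:ad:be:ef:00:03"}

# Anything on the wireless that the MDM has never heard of gets flagged
# for quarantine and an enrollment request.
for mac in sorted(wifi_seen - mdm_enrolled):
    print(f"Not in MDM, quarantine and request enrollment: {mac}")
```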
Anyways, you get the point and good luck.
Sep 16, 2016 - 03:15 AM
You need to establish a measure of quality that is appropriate to the asset and that takes into account vendor, asset value, audit risk, impact of a compliance gap, dependencies, etc. With this in mind, I recommend the following levels of quality in scans:
90% of all desktops every 90 days
90% of all server OSes every 30 days
100% of all virtual hosts (vCenter, HMC, LPAR, etc.) every 30 days
100% of servers that are marked for sub-capacity (ILMT) every 90 days
90% of all agents (e.g. SCCM or BMC) responding every 30 days
90% of all databases (Oracle, IBM and MS SQL) every 90 days
75% of all non-database products from your top 20 vendors on servers
You are not going to get to these levels on the first pass but you should be aspiring to this quality level over a 6-12 month period and then keeping it there.
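As a rough sketch of tracking those targets, with the thresholds above encoded as (coverage, max scan age) pairs; the data shapes are illustrative, not from any particular tool:

```python
from datetime import datetime, timedelta

# Illustrative targets: category -> (coverage threshold, max scan age in days).
TARGETS = {
    "desktop":       (0.90, 90),
    "server_os":     (0.90, 30),
    "virtual_host":  (1.00, 30),
    "ilmt_server":   (1.00, 90),
    "agent":         (0.90, 30),
    "database":      (0.90, 90),
}

def coverage(assets, max_age_days, now=None):
    """Fraction of assets whose last successful scan is recent enough.
    assets: list of (asset_id, last_scan datetime) tuples."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    fresh = sum(1 for _, last_scan in assets if last_scan >= cutoff)
    return fresh / len(assets) if assets else 0.0

def check_targets(inventory):
    """inventory: dict of category -> list of (asset_id, last_scan)."""
    for category, (threshold, max_age) in TARGETS.items():
        actual = coverage(inventory.get(category, []), max_age)
        status = "OK  " if actual >= threshold else "MISS"
        print(f"{status} {category}: {actual:.0%} fresh (target {threshold:.0%} within {max_age}d)")
```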
Sep 21, 2016 - 11:49 PM
This score is highly overrated, even though it is important. A better metric is software accuracy based on your contract metrics, measured on a regular basis (how often depends on your company's volatility) but at least before every renewal (I recommend every quarter).
Complement this with a view of the software and hardware that are not mapped to contracts, filtered for relevance. What's the point of scanning 95% of the estate if the missing 5% is where the compliance, security, and cost risk sits?
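A minimal sketch of that contract-centred view; the products, metrics, and counts are all made up:

```python
# Compare what each contract entitles you to against what inventory found,
# per contract metric rather than as one global scan percentage.
contracts = {
    "VendorA DB Enterprise": {"entitled": 200,  "metric": "per core"},
    "VendorB Office Suite":  {"entitled": 5000, "metric": "per device"},
}
deployed = {"VendorA DB Enterprise": 214, "VendorB Office Suite": 4820}

for product, terms in contracts.items():
    installs = deployed.get(product, 0)
    gap = installs - terms["entitled"]
    verdict = f"over-deployed by {gap}!" if gap > 0 else "within entitlement"
    print(f"{product} ({terms['metric']}): {installs}/{terms['entitled']} -> {verdict}")
```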
Hope it helps!
Best regards from Portugal
Jun 07, 2017 - 02:28 AM
Just wanted to revive this topic. I'm looking to implement some SAM-specific metrics at my organization, including software inventory accuracy. A couple of questions...
1. What's a good way to measure software inventory accuracy?
My first thought was to do a manual check of a small sample of devices and compare the installed software to what's being reported in our tool - Add/Remove Programs vs. SCCM, for example. The software audited would be limited to our largest vendors. Still, the problem with this method is that, obviously, it's very manual, especially if you're doing it on a monthly basis. Another possible method would be tool vs. tool - MAP vs. SCCM, for example - but this presents other challenges.
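For the tool-vs-tool variant, a minimal sketch, assuming each tool can export its view of installed software to CSV (the file and column names are invented):

```python
import csv

def load_installs(path):
    """Load a CSV export with hypothetical columns: device, software_title."""
    with open(path, newline="") as f:
        return {(row["device"].lower(), row["software_title"].lower())
                for row in csv.DictReader(f)}

# e.g. an Add/Remove Programs dump vs. an SCCM inventory report
arp = load_installs("arp_export.csv")
sccm = load_installs("sccm_export.csv")

print(f"Match rate: {len(arp & sccm) / len(arp):.1%}")
print(f"In ARP but not SCCM: {sorted(arp - sccm)[:10]}")
print(f"In SCCM but not ARP: {sorted(sccm - arp)[:10]}")
```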
Does anyone have any other methods they've used to measure software inventory accuracy?
2. Once we determine how to measure accuracy, what should the SLA or KPI be? Is there an industry standard for how accurate your software inventory should be?
Any info would be helpful. Thanks!