Archive for the ‘Uncategorized’ Category

Veeam repository recommendation

August 11th, 2017 No comments

Updated this post from 2015 for 2017 prices and other updates.

I repost this so often on Reddit that I decided to just create an entry here to reference:

This is what I recommend if you want a cheap repository without support, but with decent reliability/redundancy and excellent performance. Use RAID 6 for capacity, RAID 10 for the highest reliability and performance. Deduplication will increase your available space by 25-35% or more, depending on what you are storing. Increase the number of disks and JBODs for more storage.
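As a back-of-the-envelope illustration of the RAID and deduplication math above, here is a quick sketch (the function name and the 30% default dedup gain are my own illustrative choices, not from any vendor tool):

```python
def usable_tb(disks: int, disk_tb: float, raid: str, dedup_gain: float = 0.30) -> float:
    """Rough usable capacity: apply RAID overhead, then an assumed dedup gain."""
    if raid == "raid6":
        raw = (disks - 2) * disk_tb      # RAID 6 loses two disks' worth to parity
    elif raid == "raid10":
        raw = (disks // 2) * disk_tb     # RAID 10 mirrors, halving raw capacity
    else:
        raise ValueError(f"unknown RAID level: {raid}")
    return raw * (1 + dedup_gain)
```

For example, twelve 4 TB disks yield 40 TB usable in RAID 6 before dedup, versus 24 TB in RAID 10.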

This method requires a dedicated server to provide NFS, though an iSCSI target is built into Windows Server 2012 R2 and 2016. It has the advantage of being able to house Veeam as well, though you should use at least one VM as a proxy for a hot-add disk performance boost.  Note that SuperMicro also offers storage enclosures with a server motherboard built in, but they don’t have the high disk bay count that this enclosure does.

This is a JBOD enclosure with space for 45 SAS drives. Use any server you have lying around with a free PCIe slot for the RAID controller and install Server 2012 R2/2016 with deduplication enabled.  This approach also works with FreeNAS and Linux variants, but verify compatibility with the RAID controller before proceeding.

JBOD chassis (1x) – $2499 SuperMicro CSE-847E26-RJBOD1
(At the time of this post CDW had more favorable prices on this enclosure than NewEgg or Amazon.)

SAS RAID controller (1x) – $310 Avago/LSI 9280-8e

RAID controller backup battery (1x) – $165 MegaRAID LSIiBBU08

SAS cables (2x) – $58ea=$116 SFF-8088(M) to SFF-8088(M)

Disks (??x) – $191ea=? Seagate ST4000NM0023 4 TB Enterprise Capacity, 128 MB cache, 7200 RPM
(This was my spec when I purchased in 2015; obviously higher-capacity versions exist now.  Make sure to purchase SAS drives; the longer the warranty term and the higher the RPM, the better.)

Internal mini SAS cables (2x) = $68


Setting up the JBOD enclosure cabling can be a little difficult; this review from Amazon (February 23, 2014) was very helpful for me:

I just completed a ZFS on Linux deployment and am very impressed with the results. There is no better deal than a setup like this: very inexpensive with excellent performance. The components were a Supermicro 847 45 drive 4U chassis, an LSI 9200-8e external SAS card, 2 Monoprice 2M SFF-8088 cables, 10 Hitachi Ultrastar 4TB 7K4000 SAS enterprise drives, and a SanDisk Extreme II 480GB SSD (as high speed L2ARC cache and ZIL). Despite running raidz2 in an 8 drive (+2 hot spares) configuration, I have read speeds of 760 MB/s and write speeds of 330 MB/s (on a Dell PowerEdge R610). I have complete confidence that this performance will scale up to saturate the SAS link with read/write speeds of 1 GB/s as I add in more drives, matching performance of my other (much more expensive, commercially sourced) disk arrays. The content on these disk arrays is being served over NFS via Intel 10 Gigabit Ethernet cards with read speeds to RAM on the clients that are in the 500 MB/s range. The entire setup cost less than 6k for 40TB raw capacity; it’s beautiful. Total hardware setup time was about 4 hours one afternoon with two people.

This JBOD array is very nice. It has 24 disks in the front and 21 in the rear each with their own redundant dual-SAS expander backplane. It has tons of fans in the center of the box, each easily detachable if any should fail. There are four SFF-8088 connectors in the rear and, aside from redundant 1400W power, that is the only connectivity this JBOD has. The unit ships without any of those SAS connectors wired up, so you have to open the box and route things as desired. Particularly since this is a dual-SAS expander backplane on both backplanes (for redundant data paths) and also has auxiliary input connections for nearly-double SAS bandwidth, there are quite a few choices on how to set things up. Further, if so desired, you could even wire up each of the backplanes independently and have two entirely separate disk arrays (one in the front and one in the rear) all in one unit. It’s just a matter of how you choose to wire up the backplanes. Check appendix C/D of the manual for diagrams and more information. The tech support at Supermicro are also very helpful and knowledgeable, but I had a bit of a hold time (10-15 minutes).

Since the SAS routing is the most complicated thing to understand with this unit, let me go into more detail. Each bank of disks (24 front/21 back) has its own redundant dual-SAS backplane. There is another slightly cheaper model that doesn’t have the redundant backplane chip/SAS connectors wired in, but the price difference isn’t all that significant. In the front, each redundant SAS port expander has three connections: primary, auxiliary, and pass-through. Since this is a redundant SAS backplane, there are a total of six SAS connections on the backplane, so be careful, it can be easy to get confused. Primary and auxiliary are used for connecting to the front bank, and pass-through is used for chaining out to the rear backplane. If you use both primary and auxiliary connections, you can get nearly double the SAS bandwidth out of your front array since they are dedicated routes. The rear backplane has a similar set of connections, but lacks an auxiliary port, and has only primary and pass-through. With redundancy, this is a total of four SAS connectors. All this connectivity is amazing, but you only get to route four SAS connectors to the outside of your unit unless you want to leave the lid open or drill out into the side (which is quite doable), so you have to choose a configuration. You sadly can’t expose all ten SAS connectors, although that would have been truly awesome.

A couple things to note are that the redundant dual-SAS backplane functionality only works with SAS drives, so don’t populate this with SATA drives if redundancy is what you want (this is just a fact of the protocols, nothing specific to this unit). The same holds if you are daisy-chaining the rear backplane to the front backplane: you’ll want to populate with SAS drives in that case too, because SATA doesn’t do well behind daisy-chained SAS expanders. I wasn’t planning on either of those configurations, but went with SAS drives anyway because they’re a bit faster than their SATA equivalents. I’ve populated less than one half of the front backplane so far and am already very impressed.

Installation was pretty simple once you decipher how the included rails are supposed to be set up. Everything snapped into place with super smooth sliding rails. It is a pretty heavy beast though; you will want a dolly to roll it into the server room and a friend/colleague to help you slide it in. At around 70-80 pounds, it’s too much for one person to carry, but it was no problem for two people to install. It’s somewhat amazing to get this high a drive density in a 4U package, but Supermicro pulled it off very well. I now have years of expandability for my array at a fraction of the cost of commercially prepared systems. If you have any hesitations about this system, cast them aside. I’ve had two of these monsters deployed for three years already without a single hiccup. This third one was the first disk array I purchased piece by piece myself. Definitely the right move.

Categories: Uncategorized Tags:

VMware SRM – When trying to protect a VM – There are not enough licenses installed to perform the operation

April 6th, 2016 No comments

So there were plenty of licenses; what else could be wrong?


Within the log files at C:\ProgramData\VMware\VMware vCenter Site Recovery Manager\Logs I saw the following:

2016-04-06T15:38:31.479-05:00 [09472 warning 'Licensing'] Unable to decode license '': INVALID_SERIAL
2016-04-06T15:38:31.480-05:00 [07916 info 'Licensing'] Initializing with license key:
2016-04-06T15:38:31.480-05:00 [07916 verbose 'PropertyProvider'] RecordOp ASSIGN: asset, DrLicenseManager
2016-04-06T15:38:31.480-05:00 [07916 warning 'Licensing'] The license key '' expired on 1970-01-01T00:00:00Z
2016-04-06T15:38:31.481-05:00 [09240 warning 'Licensing'] This SRM instance is no longer in compliance. 41 VM(s) are not licensed for protection.

In the web client under Home > Licensing > Solutions I found an entry that didn’t exactly refer to SRM, but I assigned the SRM key to it anyway.  After assigning this key the problem was resolved.

Categories: Uncategorized Tags:

Return the X-Frame-Options HTTP header in IIS 7 for Exchange OWA

December 18th, 2015 No comments

To prevent clickjacking, add the HTTP response header “X-Frame-Options” in IIS for websites and/or Exchange OWA:

– Open IIS Manager and click on the server name in the left column.  Drill down if you only want to apply to one website.
– In Features View, double-click HTTP Response Headers.
– On the HTTP Response Headers page, in the Actions pane, click Add.
– In the Add Custom HTTP Response Header dialog box, add a header named “X-Frame-Options” and assign it the value “SAMEORIGIN”.
– Click OK
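The same header can also be set declaratively in web.config; this fragment is the equivalent of the GUI steps above (a sketch — verify placement against your site’s existing configuration before deploying):

```xml
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- Instruct browsers to refuse framing from other origins -->
        <add name="X-Frame-Options" value="SAMEORIGIN" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```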


You can validate correct function by visiting one of these websites:

Categories: Uncategorized Tags:

Server Connection: Not Connected to SRM server

December 17th, 2015 3 comments

I was unable to find this problem documented anywhere, though there was a reference to it on another blog here:


The problem presents itself this way, looking at SRM in the web client (vSphere 5.5, SRM 5.8.1):


As you can see, the client connection shows as connected, but the server connection shows “Not Connected to SRM server”.  It wasn’t obvious to me, but what this means is that the sites are not connecting to each other, even though they are paired and everything else looks green.

Additionally you will notice that the option to replicate changes to the secondary site before failover will be grayed out.

I spent several days troubleshooting before I found an indicator in the logs pointing to certificate errors.  I believe that if I had been able to un-pair and then re-pair the sites, this would have been resolved.  However, in order to un-pair sites you must first delete the recovery plans and protection groups, and when attempting to delete, the status would say “deleting” and never complete.

Ultimately, to resolve the issue I uninstalled SRM at both sites, deleting all data from the database.  I then reinstalled and reconfigured SRM, the protection groups, and the recovery plans.


Categories: Uncategorized Tags:

Recommended extensions to block at the spam filter

December 10th, 2015 No comments


Additionally, you may consider scanning these more closely, quarantining, or blocking:
*.rar (block any that are encrypted/cannot be scanned)
*.zip (block any that are encrypted/cannot be scanned)
*.pdf (block any that are encrypted/cannot be scanned)
*.xlsm (macro-enabled Excel workbook)
*.docm (macro-enabled Word document)
*.doc (block any that are macro-enabled if possible)
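As a sketch of how the extension policy above might look in a custom filter hook (the function name and watch list are illustrative, not tied to any particular spam filter product):

```python
import os

# Extensions from the list above: archives/documents that should be
# quarantined or blocked when they can't be scanned, plus macro-enabled
# Office formats.
WATCH_LIST = {".rar", ".zip", ".pdf", ".xlsm", ".docm", ".doc"}

def should_quarantine(filename: str) -> bool:
    """True if the attachment's extension is on the watch list (case-insensitive)."""
    return os.path.splitext(filename.lower())[1] in WATCH_LIST
```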

Categories: Uncategorized Tags:

Phishing test providers I recommend

August 6th, 2015 No comments
  • phishingbox
  • threatsim
  • wombat security
  • knowbe4

If training isn’t important to you, go with phishingbox. They are the cheapest.

If training is important to you, go with ThreatSim or Wombat Security.

I find knowbe4’s training materials to be meh, but that may just be me.

My personal recommendation is ThreatSim. Their training is lagging behind, but their support is beyond excellent. I suspect they will become a major player within a year.

Edit: ThreatSim has been acquired by Wombat Security – this will likely increase the cost of ThreatSim in 2016

Categories: Uncategorized Tags:

Spam filtering techniques

January 30th, 2015 No comments

The most significant things I’ve done to decrease spam and phishing attempts:

  • RBLs
  • vendor RBL (barracuda)
  • blocked entire subnets of countries we don’t do business with
  • email rate control
  • attachment filters
  • virus filter
  • heuristics
  • subject line filters for CryptoWall attempts and multi-IP distributed campaigns
  • block some foreign countries if their reverse DNS resolves back to their country TLD (e.g., .cn = China); however, I don’t block if reverse DNS records don’t exist or are incorrect
  • block TLDs in header and body that are heavily abused (list below)
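A minimal sketch of the reverse-DNS ccTLD rule above (names and the blocked-TLD list are illustrative; note it only blocks when a PTR record actually exists, and verifying that the PTR is *correct* would additionally require a forward-confirmed lookup):

```python
BLOCKED_CCTLDS = (".cn",)  # illustrative: the ccTLDs your policy blocks

def blocked_by_rdns(ptr_hostname):
    """Block only when reverse DNS exists and ends in a blocked ccTLD;
    a missing PTR record is allowed through, per the policy above."""
    if not ptr_hostname:
        return False
    return ptr_hostname.rstrip(".").lower().endswith(BLOCKED_CCTLDS)
```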

Heavily abused TLDs

Categories: Uncategorized Tags:

MS14-025/KB2928120: An Update for Group Policy Preferences

May 15th, 2014 No comments

Looking at this article:

I grabbed the check script from here (bottom of the page)

and ran it on my domain controller.  The script immediately gave me the error “Cannot bind argument to parameter ‘Path’ because it is null.”

Apparently this is an uncaught exception when no XML files exist in the path’s subfolders.  It appears that ONLY Group Policy Preferences are stored in XML, and these XML files will only show up if Group Policy Preferences are in use, meaning if you don’t have XML files under %windir%\SYSVOL\domain then you are not affected by this patch.  Group policies themselves appear to be stored as INF and other file types.
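The “am I affected?” check described above reduces to a simple existence test; a sketch (the function name is mine, and it intentionally counts any XML under the folder, which may slightly over-report):

```python
from pathlib import Path

def gpp_xml_present(sysvol_domain: str) -> bool:
    """True if any .xml files (i.e., Group Policy Preferences) exist
    anywhere under the given SYSVOL domain folder."""
    return any(Path(sysvol_domain).rglob("*.xml"))
```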


Categories: Uncategorized Tags:

Bringing a single domain controller up in an isolated network

May 14th, 2014 No comments


I wanted to create a quick test lab so I spun up a copy of a virtualized domain controller into an isolated network. The domain controller came up in a failed state with DNS and Active Directory non-functional.

Apparently, in a multi-domain-controller environment a domain controller must be able to sync with the other domain controllers/role masters in order to function.

Because this was the only domain controller in the network, and I wanted to get the test network up quickly, I performed the following workarounds:


(Thanks to user zabo2012 on the Veeam forums for the awesome instructions)


Boot the machine up in DSRM ( bcdedit /set safeboot dsrepair )

Log in with the DS repair mode password as .\Administrator

Run bcdedit to remove DSRM mode on the next boot ( bcdedit /deletevalue safeboot )

net stop ntfrs

Open regedit and browse to the following key: HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Add the following DWORD (32-bit) value: Repl Perform Initial Synchronizations
and leave it set to 0.

Still in regedit, expand: HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup
Set BurFlags to D2 (sometimes you will have to use D4, but only do this in an isolated network or it will overwrite other DCs during replication)
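For reference, the same two registry changes can be scripted with reg.exe from an elevated prompt (a sketch of the manual steps above, treat it as a config fragment and double-check the value names before running):

```bat
reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v "Repl Perform Initial Synchronizations" /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 0xD2 /f
```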



I noticed that although I was able to get other servers to authenticate off the DC after doing the above, I wasn’t able to access AD Users and Computers on the DC itself.

Seizing the roles from the other DCs (which are not available in the isolated test lab) fixed this.  To seize the other domain controllers’ FSMO roles, run ntdsutil from an elevated prompt, then:

roles
connections
connect to server <dns name of local dc server>
quit
seize schema master
seize naming master
seize rid master
seize PDC
seize infrastructure master


After seizing roles I now see the expected information in AD Users and Computers

Edit 2:

I continued to have problems with an Exchange server that was in the same test lab as the isolated domain controller so I made a few more changes:

I performed a metadata cleanup using the GUI, removing all the domain controllers that were not in the isolated lab environment.

I then set BurFlags to D4 (below) and restarted the domain controller.  After that, Exchange was working correctly.

Open regedit and expand: HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup
Set BurFlags to D4



Categories: Uncategorized Tags:

Assigning a null value to some ASP.Net parameterized queries

January 14th, 2009 No comments

In some situations an error will be thrown when trying to assign null to a parameter in a query.  In this situation, assign DBNull.Value to the parameter instead:


thisCommand.Parameters.Add("@datLastCalExpr", System.Data.SqlDbType.DateTime).Value = DBNull.Value;

Categories: Uncategorized Tags: