Find and list inactive Exchange mailboxes with powershell

So, I tinkered with this a while back and found the following commands very useful to complete the routine task most system admins have of disabling inactive mailboxes. In this case, I needed to list inactive mailboxes on our Exchange 2007 mailbox servers. Then, I needed commands or tasks to run against this list. Here’s my script:

1. Find mailboxes with no logon in over 90 days and output the list to a txt file. Edit the mailbox database path and the txt file output path to match your environment:
Get-MailboxStatistics | where {$_.Database -eq "Server1\StorageGroup1\MailboxDatabase1" -and $_.LastLogonTime -lt (get-date).addDays(-90) -and $_.ObjectClass -eq "Mailbox"} | sort-object lastlogontime | ft DisplayName > C:\ExchangeCleanup\NoLogin90Days.txt
2. Use ".TrimEnd()" to clean up trailing spaces and rewrite the cleaned-up txt file:
$list1 = get-content C:\ExchangeCleanup\NoLogin90Days.txt
$list1 | foreach {$_.TrimEnd()} > C:\ExchangeCleanup\NoLogin90Days.txt
3. Disable the mailboxes that were in the NoLogin90Days.txt file:
get-content "C:\ExchangeCleanup\NoLogin90Days.txt" | disable-mailbox
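The filtering and trimming logic in steps 1 and 2 is language-neutral. Here's a minimal sketch of the same idea in Python, with stand-in mailbox records instead of real Get-MailboxStatistics output (the names and dates are illustrative only, not Exchange API calls):

```python
from datetime import datetime, timedelta

# Illustrative stand-in data; real values come from Get-MailboxStatistics.
mailboxes = [
    {"DisplayName": "Alice Adams ", "LastLogonTime": datetime(2009, 1, 5)},
    {"DisplayName": "svc-backup  ", "LastLogonTime": datetime.now()},
]

# Same idea as (get-date).addDays(-90).
cutoff = datetime.now() - timedelta(days=90)

# Step 1: keep only mailboxes with no logon in 90+ days, oldest first.
stale = sorted((m for m in mailboxes if m["LastLogonTime"] < cutoff),
               key=lambda m: m["LastLogonTime"])

# Step 2: trim trailing spaces (ft pads its columns) before writing the list.
names = [m["DisplayName"].rstrip() for m in stale]
print("\n".join(names))
```

The trim step matters because ft pads every row out to the column width; without it, each name in the txt file carries invisible trailing spaces that break the later pipeline.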

Now, you’re correct that I could just take the output from step 1 and go straight into step 3. HOWEVER, I’d like to be able to review the list of inactive users, and edit the list if needed to avoid disabling any system or service mailboxes that don’t actually have users logging into them.
Another task I use this for is to move the inactive accounts to mailbox servers with more space. I usually run this before step 3 above:
get-content "C:\ExchangeCleanup\NoLogin90Days.txt" | move-mailbox -targetdatabase "Server2\StorageGroup2\MailboxDatabase2"

I strongly recommend you test these commands in a test environment before running them on your production systems. I’ve tested them on Exchange 2007 and 2010, but your environment may differ.
Rick Estrada
Pastebin for reference:

Installing HP System Management tools in vSphere

So I had this issue where I needed to modify RAID controller settings and add more drives on a DL380 G6 running vSphere 4.0 Update 1, in production, with no downtime.
Here’s how it’s done:

Download the “HP Management Agents for VMware ESX 4.x” here and the “HP Array Configuration Utility for Linux” here.

Use the vSphere Client to copy the files to the server, preferably into a new folder.

Now, SSH into the server, cd into the directory containing the new files, and extract the .tgz file by running:
tar -zxvf hpmgmt-8.3.0-vmware4x.tgz

Now run the shell script using the --install switch:
./hpmgmt/830/ --install

Install the ACU by running:
rpm -i cpqacuxe-8.35-7.0.noarch.rpm

Enable remote ACU control:
cpqacuxe -R

And that’s it.
Now you should be able to access the HP SIM homepage in your browser. Log in with your vSphere root credentials, and you should have a link to the ACU on the homepage.


The correct URL for the HP System Management page is:

Easily display CDP connection info

Here’s a quick one-liner to efficiently display CDP (Cisco Discovery Protocol) information on your Windows PC. I must give credit to this post.

First of all, download and install WinPcap and a copy of WinDump.exe here. WinDump is a runtime .exe, so no installation is necessary.

Now, WinDump is a command-line utility, so I recommend putting it in your Windows or system32 directory, where you can easily run it from the command line in any working directory.

Use WinDump.exe -D to get your network connection’s identifier string:
C:\>WinDump.exe -D
1.\Device\NPF_{FD16AF8D-2700-46D5-8C2B-759B0C54991A} (Sun)
2.\Device\NPF_{39E87FB9-DB40-4476-8B05-601AB3F4CC08} (Microsoft)
3.\Device\NPF_{6588B9CB-A7E7-4998-A780-3652193EA45B} (Intel(R) PRO/1000 PL Network Connection)

Here’s the command format I use:
C:\>WinDump.exe -nn -v -i \Device\NPF_{6588B9CB-A7E7-4998-A780-3652193EA45B} -s 1500 -c 1 "ether[20:2] == 0x2000"

The command breakdown is similar to what is on the original post sample.
-nn displays output in numeric-only format (no name or port resolution)
-v displays verbose information
-i specifies the interface to use for the capture
-s specifies how many bytes of each packet to snag
-c 1 exits the program after capturing one packet
"ether[20:2] == 0x2000" matches packets whose two bytes at offset 20 from the start of the Ethernet header equal hex 2000, which identifies CDP

The output of the command above after successfully capturing a CDP packet looks like this:
15:50:59.355171 CDPv2, ttl: 180s, checksum: 692 (unverified), length 418
Device-ID (0x01), length: 11 bytes: ‘Switch2’
Address (0x02), length: 13 bytes: IPv4 (1)
Port-ID (0x03), length: 19 bytes: ‘GigabitEthernet1/2’
Capability (0x04), length: 4 bytes: (0x00000029): Router, L2 Switch, IGMP snooping
Version String (0x05), length: 289 bytes:
Cisco Internetwork Operating System Software
IOS (tm) Catalyst 4000 L3 Switch Software (cat4000-IS-M), Version 12.1(13)EW1, EARLY DEPLOYMENT RELEASE SOFTWARE (fc1)
TAC Support:
Copyright (c) 1986-2003 by cisco Systems, Inc.
Compiled Tue 18-Mar-03 07:33 by hqluong
Platform (0x06), length: 15 bytes: ‘cisco WS-C4507R’
Prefixes (0x07), length: 5 bytes: IPv4 Prefixes (1):
VTP Management Domain (0x09), length: 5 bytes: ‘vtpdomain’
Native VLAN ID (0x0a), length: 2 bytes: 129
Duplex (0x0b), length: 1 byte: full
AVVID trust bitmap (0x12), length: 1 byte: 0x00
AVVID untrusted ports CoS (0x13), length: 1 byte: 0x00

This info is great; there is lots of useful data: switch name, IP address, interface, switchport native VLAN, VTP domain, etc. But it is not immediately clear at a glance what’s on the other end, so here’s a little bit of help.

I put this command and others in a batch file to simplify things and to initiate CDP capture from an icon. Start the batch file with the WinDump command and have the output echo into a .txt file.
WinDump.exe -nn -v -i \Device\NPF_{6588B9CB-A7E7-4998-A780-3652193EA45B} -s 1500 -c 1 "ether[20:2] == 0x2000" > RESULT.txt

Now, using the Find command, have it search the RESULT.txt file and output the data you like as so:
FIND /I "Device-ID" RESULT.txt
FIND /I "Port-ID (0x03)" RESULT.txt
FIND /I "Address (0x02)" RESULT.txt
FIND /I "Native VLAN ID (0x0a)" RESULT.txt

So now, just run the batch file, and when a CDP packet is captured, the output will display only the data you need as so:
Device-ID (0x01), length: 11 bytes: ‘Switch2’

Port-ID (0x03), length: 19 bytes: ‘GigabitEthernet1/2’

Address (0x02), length: 13 bytes: IPv4 (1)

Native VLAN ID (0x0a), length: 2 bytes: 100
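The FIND-based extraction amounts to a case-insensitive substring match over the capture output. Here’s a quick sketch of the same idea in Python; the sample text below stands in for a saved RESULT.txt (trimmed to a few lines):

```python
# Minimal sketch of the FIND /I extraction in Python.
# The sample text stands in for a RESULT.txt produced by WinDump.
sample = """\
15:50:59.355171 CDPv2, ttl: 180s, checksum: 692 (unverified), length 418
Device-ID (0x01), length: 11 bytes: 'Switch2'
Address (0x02), length: 13 bytes: IPv4 (1)
Port-ID (0x03), length: 19 bytes: 'GigabitEthernet1/2'
Native VLAN ID (0x0a), length: 2 bytes: 129
"""

WANTED = ("Device-ID", "Port-ID (0x03)", "Address (0x02)", "Native VLAN ID (0x0a)")

# Case-insensitive substring match, like FIND /I does.
matches = [line for line in sample.splitlines()
           if any(w.lower() in line.lower() for w in WANTED)]
print("\n".join(matches))
```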

This little script saves me so much time everyday, and is a great alternative to commercial software that does the same. If anyone has any ideas to help make it better, please let me know!

Citrix License Manager service fails

So, while working on one of the DCs at work, I noticed errors in the application event log:

Event ID 4098 The CTXLMC service failed to start.
It was preceded by the error:
Event ID 4096 Could not load the Java Virtual Machine.
Then I remembered uninstalling the JRE on this box a while back, because really, it’s a DC; why would it be there? Well, I got my answer.
But no matter how much I googled, I couldn’t find an easy answer to which specific version of the JRE I needed to install, until, researching Citrix support, I found:
We’re running Presentation Server 4.5 on a different box, but licensing on this box. And it seems the licensing services require JRE.
And after trying different versions, JRE 1.5.0 Update 9 worked just fine. The service started right up when manually started, with no errors in event logs.
Hope this helps!
Rick Estrada

It’s been a while, huh?

Sorry about not posting for a while, but I’ve been tackling projects left and right, trying to shorten the list on my boss’ desk.

So major issue broke out this morning: password changes in Active Directory were failing with error: 
The password does not meet the password policy requirements. Check the minimum password length, password complexity and password history requirements
Even through the command line (net user username password /domain) the error persisted. So I tried the usual: editing the Default Domain Policy, allowing/blocking inheritance on the Domain Controllers OU in GPMC, editing the domain controllers policy, the usual stuff you find googling the error. Nothing worked... until I noticed the same error in the application event log, over and over:
The description for Event ID ( 5 ) in Source ( WinPSAFilter ) cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer. You may be able to use the /AUXSOURCE= flag to retrieve this description; see Help and Support for details. The following information is part of the event: .
What I hadn’t noticed was that it started at about the same time users began reporting issues changing their passwords. Well, WinPSAFilter belongs to our SSO product, CA SSO from Computer Associates.
Long story short, this software sucks. I uninstalled the Password Sync Agent from all of our DCs, and that was it. Password changes were now allowed.
Does anyone out there use this product? Have you had problems like this with it?
We were going to use it in a test environment, but have now decided that it just isn’t going to work out.
I have another Cisco tip to post, but I’ll post after work.
Rick Estrada

Debian GNU/Linux 5.0 released

Rumors all week indicated it would be released on Valentine’s Day, but all day yesterday, the site hadn’t changed. This morning, at 3:58 AM, I got the email and confirmed it was available for download. I have a few errands to run today, but as soon as I’m done, I’ll install it on my X60s and post about it.

I’ll keep you all posted. In the meantime, go download your copy:

Automate backups of SQL Express database

We have a work-order tracking system that runs on SQL Server 2005 Express Edition. One of the many limitations of SQL Express is its lack of the SQL Server Agent found in the full versions of SQL Server, which lets you schedule automated tasks, one being scheduled backups.

So to work around this limitation, I created a scheduled task in Windows, which runs the following commands from a batch file to automatically back up the database:
OK, so one thing to do before you go out and use this: you need to create a “Backup Device”, which you can do with SQL Server Management Studio Express. If you need a copy, just google it; it is a free download from Microsoft. The second and third commands are for basic DB maintenance, but it doesn’t hurt to run them as well, at the same time.
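A backup job along these lines can be driven by sqlcmd from the scheduled batch file. Here’s a small Python sketch that builds the sqlcmd invocation and the BACKUP DATABASE statement; the instance, database, and backup-device names are illustrative assumptions, not values from this post:

```python
# Sketch: build a sqlcmd command line and T-SQL for a SQL Express backup.
# Instance, database, and backup-device names are illustrative assumptions.
def backup_command(instance, database, device, retain_days):
    tsql = (f"BACKUP DATABASE [{database}] TO [{device}] "
            f"WITH RETAINDAYS = {retain_days}, NOFORMAT, NOINIT")
    # -S server\instance, -E trusted connection, -Q run query and exit
    return ["sqlcmd", "-S", instance, "-E", "-Q", tsql]

cmd = backup_command(r".\SQLEXPRESS", "WorkOrders", "WorkOrdersBackup", 7)
print(" ".join(cmd))
```

Swap in your own instance, database, and backup-device names, and the printed command line is what the batch file would run.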
That’s about it; just edit the commands above, set your retention period (RETAINDAYS = x), put them in a batch file, and create a scheduled task with your time requirements.
Email me if you have any questions or suggestions.
–Rick Estrada

Automating a DPM Recovery

OK, so we’re using Microsoft Data Protection Manager 2007 (DPM) as our backup solution for several of our SQL DB and Windows file servers. It’s pretty straightforward stuff, but like most new software, documentation for DPM is limited and googling for answers yields limited help. Let me describe one problem we faced, then give you an overview of the solution I engineered.
The DB system runs on two clustered Live servers, and is required (by the vendor) to be replicated to a Report server for resource consuming queries. This is a SQL2005 implementation.
The vendor put a backup job in place which basically does a DB dump to a .bak file every night at 1AM, and hourly log dumps after that. We have implemented DPM with VSS to backup that .bak file nightly, as soon as the dump finishes.
Now, because the DB must be replicated to the report server nightly, we will be using that to validate our backups by restoring, nightly, as well.
So what we needed to do, looks like this:
1. SQL dump @ 1 AM
2. DPM backup of .bak file @ 2 AM
3. DPM restore of .bak to RPT @ 3 AM
4. SQL restore on RPT @ 4 AM
The problem is, there is no way to automate restores in DPM’s GUI; it must be done through PowerShell. Here’s the script that did it.
$from = Get-Date -f "dd-MMMM-yyyy 00:01:00"
$to = Get-Date -f "dd-MMMM-yyyy HH:mm:ss"
# these two lines define the time parameters for the SearchOption
$pg = Get-ProtectionGroup DPMGROUPNAME
$ds = Get-Datasource $pg[8]
# this sets the $pg variable, and $pg[8] uses the ninth item from its output to set the $ds variable
# it gets a little complex below this line
$so = New-SearchOption -SearchString db_dump_file.BAK -FromRecoveryPoint "$from" -ToRecoveryPoint "$to" -SearchDetail filesfolders -SearchType contains -Recursive -Location "F:\"
# with the search option variable set, you can now choose which item to recover
$ri = Get-RecoverableItem -Datasource $ds[2] -SearchOption $so
$ro = New-RecoveryOption -TargetServer -RecoveryLocation copytofolder -FileSystem -AlternateLocation "F:\MSSQL\DailyLiveRestore" -OverwriteType overwrite -RecoveryType Restore
# now comes the actual recovery command; a much simpler command, only 2 options
Recover-RecoverableItem -RecoverableItem $ri -RecoveryOption $ro
Here are all the commands without my comments:
$from = Get-Date -f "dd-MMMM-yyyy 00:01:00"
$to = Get-Date -f "dd-MMMM-yyyy HH:mm:ss"
$pg = Get-ProtectionGroup DPMGROUPNAME
$ds = Get-Datasource $pg[8]
$so = New-SearchOption -SearchString db_dump_file.BAK -FromRecoveryPoint "$from" -ToRecoveryPoint "$to" -SearchDetail filesfolders -SearchType contains -Recursive -Location "F:\"
$ri = Get-RecoverableItem -Datasource $ds[2] -SearchOption $so
$ro = New-RecoveryOption -TargetServer -RecoveryLocation copytofolder -FileSystem -AlternateLocation "F:\MSSQL\DailyLiveRestore" -OverwriteType overwrite -RecoveryType Restore
Recover-RecoverableItem -RecoverableItem $ri -RecoveryOption $ro
Working backwards, it should become clear what parameters are required before the next cmdlet will work. Remember that in a list of items, the first item is 0, not 1. Run commands like "Get-ProtectionGroup DPMGROUPNAME" interactively first, so you will know what to use for $pg[#] when setting a datasource. I used "Get-Date -f "dd-MMMM-yyyy HH:mm:ss"" because this runs at 3 AM, and I needed the SearchOption to limit itself to recovery points between 1 AM and 3 AM on the current day. As you can see, PowerShell experience is a must.
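The time-window trick used with Get-Date (a fixed 00:01:00 start and a “now” end, both on today’s date) is easy to see in any language; here is a small Python sketch of the same idea:

```python
from datetime import datetime

now = datetime.now()
# Equivalent of: Get-Date -f "dd-MMMM-yyyy 00:01:00" (today, fixed window start)
window_from = now.replace(hour=0, minute=1, second=0, microsecond=0)
# Equivalent of: Get-Date -f "dd-MMMM-yyyy HH:mm:ss" (the moment the job runs)
window_to = now

print(window_from.strftime("%d-%B-%Y %H:%M:%S"))
print(window_to.strftime("%d-%B-%Y %H:%M:%S"))
```

When the job runs at 3 AM, this window brackets exactly the 1 AM dump and the 2 AM backup recovery points from the same day.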
OK, on to the automation side of things: PowerShell scripts should be saved with the file extension .ps1. Now, you can’t just double-click the file to execute it; you have to tell Windows PowerShell to load the DPM cmdlets so it can interpret the commands in the .ps1 file. Only then can you make this a scheduled task.
Here’s how I did it. First, I customized the commands above for my environment and saved them in a .ps1 file (C:\Scripts\AutoRecover\SQL-DB-RECOVER.ps1). I then saved a batch file with the following command:
C:\WINDOWS\system32\windowspowershell\v1.0\powershell.exe -PSConsoleFile "C:\Program Files\Microsoft DPM\DPM\bin\dpmshell.psc1" -command ".'C:\Scripts\AutoRecover\SQL-DB-RECOVER.ps1'"
THEN I created the scheduled task to kick off the batch file, which kicks off the powershell script.
Because of the "-OverwriteType overwrite" option, the recovered .bak file will replace the previous one each night on the report server, and then SQL on the report server will restore it with its own scheduled job. NOTE: a DPM agent must be installed on the target server for DPM to recover to it.
And there it is. I know my post became a little vague towards the end, but I’m getting a little sleepy here. Please, feel free to email me if you need help getting this to work in your environment, or if you just have any questions.
If you took the time to read this, I really hope this has helped you out in your implementation of DPM.
Finally, I’d like to thank my good friend, Sassan K. for getting me started with Microsoft DPM last year.
-Rick Estrada

New Year… New Stuff…

OK, so far so good, day 5 of 2009, day 1 back at work…
I finally made time to finish my review of the HP Mini 2133. My boss bought a few to try out, one with Vista and two with XP Pro. I’m actually typing this up on the Mini. It’s actually a pretty decent system. A netbook is a netbook, so I wasn’t expecting too much going in. I have found the keyboard to be the biggest challenge. My hands aren’t huge, yet they are big enough to make typing on the Mini a learning experience. So much so, in fact, that I’ve become very well familiar with the backspace key. It seems to me that the keys around the pinkies and ring fingers are a little too cramped. It’ll do for casual use on the go, but not for typing a novel.
The screen also surprised me. Despite being small, the resolution is great at 1280×768. My primary laptop can only do 1024×768. However, I wouldn’t stare at it all day. On the downside, it seems to get dingy with fingerprints easily, unless it’s just me being careless. (I’m terrible about fingerprints on a screen)
Performance was pretty good too. I found it more than enough for web browsing and streaming video. This model has 2GB, so I was able to run multiple office apps and web browsers with it not skipping a beat.
All in all, I would definitely recommend this netbook for travel or light use. It is by no means a full fledged laptop, nor a system you’d use 8 hours a day. But it’s perfect for on the go.
I’ll post pictures tomorrow for side by side comparison with other laptops.
Next time, benchmark numbers…
Rick Estrada