-- Find some new blood. The problem with using elite employees is that they are awesome at what they do. They are exactly who you want saving the business during a major disaster. Unfortunately, when the real "stuff" hits the fan, you might have no choice about who answers your cry for help. That might mean your human resources manager needs to reboot a server because she was the first person on the scene.
Populate the business continuity test team with newer employees, particularly ones from other parts of the business. They won't have the organizational wisdom to read between the lines and fill in the missing steps in your documentation. This approach is also a little Machiavellian because the stress element will be heightened for your participants, simulating the emotions they would feel in an actual crisis.
-- Eliminate one of the key elements of the test. When was the last time only one element was affected by an outage? Yeah, didn't think so. Mimic the true FUBAR situation and order some random hits on your infrastructure. Don't let them access the share drive. Take down the email server (or block their IDs from using it so you don't disrupt the flow of work for other users). Cut off the Internet. Many disaster recovery tests rely on other critical systems that might themselves be crippled by the crisis. By removing some of those supports, you will get an idea of how rigid your disaster plan really is and how creative your staff can be. If anything, without the Web, you'll prevent them from surfing Reddit during the slow moments.
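If you want the hits to be genuinely random rather than whatever the coordinator feels like unplugging, a few lines of script can pick the victims for you. This is a minimal sketch; the system names are hypothetical placeholders, and the optional seed lets you replay the exact same scenario in a later exercise or post-mortem.

```python
import random

# Hypothetical inventory of systems a drill coordinator could "take out".
# Replace with whatever your environment actually runs.
SYSTEMS = ["share_drive", "email_server", "internet_access", "vpn", "wiki"]

def pick_outages(systems, count, seed=None):
    """Randomly choose which systems to disable for this test run.

    Passing a seed makes the selection reproducible, so the same
    scenario can be rerun when comparing two teams or two plan revisions.
    """
    rng = random.Random(seed)
    return rng.sample(systems, k=min(count, len(systems)))

# Example: knock out two systems for today's exercise.
outages = pick_outages(SYSTEMS, 2, seed=42)
print(outages)
```

Keeping the seed in the exercise log is the design point here: "random" failures are only useful for comparison if you can reproduce them.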
-- Eliminate a creature comfort. Imagine trying to bring a server back online without air conditioning in the middle of August. Or perhaps trying to run critical communications from the free Wi-Fi in a waffle house. It might be as simple as conducting a tabletop exercise without any tables or chairs, but it should be just enough to throw your team off balance. If you really want to satisfy your inner Ebenezer Scrooge, only allow them to use one extension cord to simulate low power availability, then watch with delight as they manage their limited resources and battery life. If you want to be brutal, put that extension cord in the hallway and make it too short to reach the workroom.
-- Schedule the disaster recovery test to last at least 24 hours. Real service outages usually aren't solved neatly in eight hours. Depending on the crisis, you could be looking at seven days.
A marathon test forces the team to pace itself and to come up with contingency plans that let people tag out and get some sleep.
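The tag-out problem is really just a shift roster. A rough sketch of how a coordinator might carve a 24-hour window into rotating shifts follows; the team names, shift length, and start time are all illustrative assumptions, not anyone's actual schedule.

```python
from datetime import datetime, timedelta

def build_shift_roster(members, start, total_hours=24, shift_hours=8):
    """Split a marathon test window into shifts so everyone can tag out.

    Returns (member, shift_start, shift_end) tuples; members rotate in
    order and wrap around if the window outlasts the roster.
    """
    roster = []
    slots = -(-total_hours // shift_hours)  # ceiling division
    window_end = start + timedelta(hours=total_hours)
    for i in range(slots):
        shift_start = start + timedelta(hours=i * shift_hours)
        shift_end = min(shift_start + timedelta(hours=shift_hours), window_end)
        roster.append((members[i % len(members)], shift_start, shift_end))
    return roster

# Hypothetical three-person team starting a 24-hour drill at 09:00.
team = ["alice", "bob", "carol"]
for member, s, e in build_shift_roster(team, datetime(2024, 8, 1, 9, 0)):
    print(f"{member}: {s:%a %H:%M} - {e:%a %H:%M}")
```

Even if nobody automates this, the exercise of writing the roster down is what surfaces the contingency gap: who covers hour 19 when the plan assumed an eight-hour day?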