Last week was one of surveys and tests. I'm involved in a review of a telework pilot project that includes a survey, and in the implementation of a survey on work styles and work practices. On top of that, a couple of the dissertation students I'm working with are planning their research designs, which involves selecting a survey instrument and devising methods of using it effectively. And I got an email saying I had to take mandatory training and pass an end test on it. So I found myself doing a lot of thinking about getting the most from tests and surveys, and having some interesting discussions along the way.
On the telework pilot project we're going to take multiple approaches to data gathering. We spent a full day working out the timelines and the methods: a survey, 1:1 interviews, focus groups with randomly selected participants, a review of measurement data already collected (e.g. sickness and absenteeism rates, customer satisfaction scores), and a review of the communications strategies and materials. It will be a labor-intensive next few weeks. What we're hoping is that we'll get information we can use to develop best-practice guides for others who want to introduce or extend teleworking practices.
On the work styles and work practices survey we're aiming to find out whether people think of themselves as deskbound, internally mobile, or externally mobile, and within these three categories whether they feel their work is interactive or concentrative. So someone who writes policy papers, which demands desk research and then sitting at a computer screen composing, might describe himself as 'deskbound concentrative': he is not interacting much with other people, and not spending much time away from his desk at meetings or events.
Interestingly, the UK Office of Government Commerce has come up with a similar set of six workstyle categories, albeit in slightly different language, described in its booklet Working Beyond Walls.
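For clarity, the six categories are just the cross of two dimensions — three kinds of mobility and two modes of working. A quick sketch (using the labels above; the OGC booklet's wording differs slightly):

```python
from itertools import product

# Two dimensions of workstyle from the survey described above
mobility = ["deskbound", "internally mobile", "externally mobile"]
mode = ["interactive", "concentrative"]

# Crossing the dimensions yields the six workstyle categories
workstyles = [f"{m} {k}" for m, k in product(mobility, mode)]
print(workstyles)
```

The policy paper writer in the example would fall under "deskbound concentrative".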
Obviously the categories are indicators only. The policy paper writer might initially have to conduct interviews, travel to other locations to do research, and so on. In the course of planning and writing the paper he might describe himself as any one of the six types of worker. What we're measuring can never be exact – it's a subjective data point – which in my view is the key limitation of survey responses: unfortunately they are often taken to be objective, and decisions are made on the basis of them.
But we're hoping that this survey information, combined with the follow-up work, will give us enough to think about space in innovative ways and save on corporate real estate, carbon emissions, and so on. Some of that innovative thinking and those ways of saving are described in the US Government's General Services Administration booklet Leveraging Mobility, Managing Place, which makes a good comparison with the UK booklet I just mentioned.
The other limitations of this survey are, again, those of all surveys: will the respondents be truthful, are the questions the right ones, will we learn anything useful, and will we get enough responses to make an adequate sample? We're going to run some face-to-face focus groups to try to get more data, but a further limitation in this instance is that respondents are being asked about current work styles, which may not help that much since we are designing an office for three to five years out.
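On the "adequate sample" question, the usual back-of-the-envelope check is the standard sample-size formula for estimating a proportion, with a finite-population correction. This is a sketch, not part of the project plan, and the 500-person pilot size is a made-up figure for illustration:

```python
import math

def sample_size(population: int, margin: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Responses needed to estimate a proportion within `margin`
    at ~95% confidence (z = 1.96), assuming worst-case variance
    (p = 0.5), with a finite-population correction."""
    n0 = z * z * p * (1 - p) / (margin * margin)  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)          # finite-population correction
    return math.ceil(n)

# Hypothetical: a 500-person pilot needs roughly 218 responses
# for a +/-5% margin -- a response-rate target, not a formality.
print(sample_size(500))
```

The striking thing the correction shows is how little the required number shrinks for small populations: you need responses from close to half a 500-person pilot, which is why response rate is worth worrying about up front.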
The students working on their dissertations struggle with the survey aspect. They often want to design their own, not realizing that you need to be very skilled to construct something that yields good data (hence some of my reservations about the workplace survey I described). I find I spend a good amount of effort encouraging them to use an instrument that has been well used and validated by others. A good text on survey design is Floyd Fowler's Survey Research Methods, which covers a lot of ground and demonstrates to students that survey design is not just a matter of thinking up some questions. I must reread it before starting to design the telework survey.
The mandatory training with its end test was, I thought, a prime example of a poorly designed approach, for reasons you'll see. Because I did not have time to work through the information/education element (the point of the whole thing), I went straight to the end test and passed without having read a word of the training material. So someone is going to check the box on people having taken the training (as evidenced by end-test completion), which says nothing about the application of the training, the value of the investment in developing the whole program, the quality of my application of anything I should have learned, and so on.
I mentioned to a colleague that going straight to the end test and passing was a missed opportunity for me to learn something that might be useful, interesting, or worthwhile. But her view differed from mine: if I could pass the end test without working through the material, why bother? I must have enough knowledge of the topic to feel confident in my abilities.
What are your views on survey designs and end tests?