In SAM, our objective is to identify attachment patterns in order to eventually categorise the attachment status of a child towards a caregiver. By doing so, we will be able to help medical practitioners focus their efforts on children who need attention.
In order to identify attachment patterns, we are looking at behavioural cues from children as they play our SAM game. We think that one of these cues could be hidden in the way children manipulate the dolls while they enact their story. To verify this hypothesis, we had to look for a non-intrusive means of observing how children manipulate the dolls during the game.
There are several approaches we could follow. For example, we could measure the position of the dolls in space by analysing the video recordings we are already collecting for facial behavioural cues. However, this option has many drawbacks. First, it is difficult to track the dolls continuously, as they may be hidden by the children's hands or by the furniture on the mat. Another challenge is that this approach would require specific computer vision algorithms to identify the dolls in the video recordings, which would take time to design, with unpredictable results at the current stage of the project. A more promising option would be to collect data directly from the dolls and get around the limitations of the computer vision alternative.
While it sounds great, collecting information directly from the dolls raises many questions. To begin with, what should the dolls tell us? There are many different types of data we could collect and analyse. For instance, as with computer vision, we could measure the spatial configuration of the doll in space. Measuring the pressure applied to the doll during the game could be another behavioural cue. We could also collect biometric data such as skin conductance or heart rate, as many smart watches and sport bands already do. Unfortunately, time is a critical resource, and in this project we won't be able to investigate all the possibilities open to us.
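To make this concrete, here is a rough sketch of what the records for two of these cues could look like. All field names, types, and units are hypothetical illustrations, not the actual SAM data format:

```python
from dataclasses import dataclass

# Hypothetical records for two of the behavioural cues mentioned above.
# Field names and units are illustrative assumptions only.

@dataclass
class MotionSample:
    timestamp_ms: int   # milliseconds since the start of the play session
    accel: tuple        # (x, y, z) acceleration of the doll, in g
    gyro: tuple         # (x, y, z) angular rate, in degrees/second

@dataclass
class PressureSample:
    timestamp_ms: int
    grip_force_n: float  # force applied on the doll's body, in newtons

# A play session would then be a time-ordered stream of such samples.
session = [
    MotionSample(0, (0.0, 0.0, 1.0), (0.0, 0.0, 0.0)),
    PressureSample(5, 2.4),
]
```

Even this toy example hints at the trade-off discussed above: each extra cue multiplies the amount of data the dolls must capture, store, and send.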
The next question concerns which sensors to integrate into our toys. Fortunately, the tight constraints on size and autonomy have guided us towards a good idea of what we need. Finally, how should the dolls talk to us? This last question is as essential as the previous ones. The way the data are transferred from the dolls to the computer could affect the game experience. For example, if the dolls were connected to the computer with a cable, children would not be able to move the dolls freely during the game, degrading the overall game experience. However, getting rid of cables is not trivial. It raises yet more questions: should we transfer data to the computer while the children are playing, or should we store the data locally in the dolls and transfer them to our secure storage server after the children are done playing? Going cable-free also means that our toys would need to be self-powered. But what would be the impact of either data transfer solution on energy consumption and on the battery integrated into the dolls?
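A back-of-the-envelope calculation illustrates why this choice matters. The sketch below compares the two strategies using an idealised battery-life model (capacity divided by average current draw); every figure in it is an assumption made up for illustration, not a measurement from our dolls:

```python
# Illustrative comparison of the two transfer strategies.
# All current and capacity figures are assumed, not measured.

BATTERY_MAH = 150.0       # a small cell that might fit inside a doll

# Strategy 1: stream wirelessly while the child plays.
STREAM_CURRENT_MA = 8.0   # radio kept active for the whole session

# Strategy 2: log locally, dump to the server after the session.
LOG_CURRENT_MA = 2.0      # sensors + local storage writes only
DUMP_CURRENT_MA = 10.0    # radio burst during the post-session dump
DUMP_FRACTION = 0.05      # the dump lasts ~5% of the total time

def battery_life_hours(avg_current_ma: float) -> float:
    """Idealised battery life: capacity divided by average draw."""
    return BATTERY_MAH / avg_current_ma

stream_hours = battery_life_hours(STREAM_CURRENT_MA)

log_avg_ma = (1 - DUMP_FRACTION) * LOG_CURRENT_MA + DUMP_FRACTION * DUMP_CURRENT_MA
log_hours = battery_life_hours(log_avg_ma)

print(f"streaming : {stream_hours:.1f} h")   # 18.8 h with these numbers
print(f"log+dump  : {log_hours:.1f} h")      # 62.5 h with these numbers
```

Under these assumed figures, logging locally and dumping afterwards stretches battery life by roughly a factor of three, at the cost of not seeing the data live during the session.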
This post gave a quick overview of the challenges to consider when designing artefacts to help us investigate new ways of measuring attachment. The next post will bring a few insights into overcoming these challenges, with a preview of our smart dolls prototype.