A user interface design case study based on ongoing research in swarm robotics.
Introduction
My husband works in a research team that studies collective behaviour in robotic swarms. The team studies how a large number of robots can coordinate themselves to accomplish a cooperative task in an autonomous, decentralised fashion. This means that usually the robots are working all by themselves, but during the development process it is sometimes useful to have a human-in-the-loop to monitor the swarm and control it.
Aside from working on the robotic platform, they present their work in academic journals and conferences, and showcase their findings to the project’s stakeholders and potential investors. In an academic environment, they have no trouble communicating by using their existing tools and methods because the audience speaks their language. However, investors and stakeholders may have different profiles and not necessarily have a strong science or engineering background.
A Tale of Two Interfaces: CLI and GUI
During the research and development phase of the swarm, the team uses a ‘Command-Line Interface’ (CLI) to interact with the swarm. A CLI is a text-based interface, where the user can issue instructions by typing commands and read the output text from a screen. The Mac Terminal is an example of a CLI.
Having a CLI as the primary interface of the swarm is one of the difficulties during their presentations and live demonstrations to non-technical audiences. Most people are not comfortable with a CLI, and it is usually not easy to grasp unless one has programming experience. Thus, they opted to develop a simple ‘Graphical User Interface’ (GUI) to make their presentations more accessible.
Even with the limited functionality of the current GUI, it has helped immensely during presentations in showing the capabilities of the swarm in a clear and understandable manner.
A GUI makes it easier for people to imagine potential applications of a technology, especially when it is simple enough that even people without extensive programming training can operate it themselves.
Developing an effective GUI requires considerable planning and design. At the moment, the research is still in its early stage and not yet ready for an extensive GUI implementation. That being said, it’s still such an interesting challenge that I thought it would be a great opportunity for a design case study.
Creative Development
The objective of this exercise is to design a GUI for a robotic swarm that will showcase its capabilities in presentations. No applications to specific industries are considered at this stage.
Identifying Primary Features
I started by interviewing the team about their research, with an emphasis on their experiences during presentations and demos. I asked them about the recurring questions the audience raises regarding the swarm’s capabilities. We listed the activities the swarm is designed to perform and the questions they most often encounter when interacting with investors and stakeholders. Their feedback contained a lot of great insights into what people expect to do with the technology. The list we composed guided me on which features and commands to include in the interface.
It all boils down to four key points:
1.) Map and scan area: This is the most important function of the swarm. Fundamentally, the research is centred around the capacity of a robot swarm to map an area. Mapping areas and how to present this information will be the main focus of the interface design.
2.) Define groups within the swarm: One fundamental research question in the field is how robots can self-organise and group themselves to perform specified tasks. During demos, the audience is usually very curious about how the swarm divides itself into groups.
3.) Send command to the swarm and to specific defined groups within the swarm: The team sends commands to the swarm by coding input on a terminal. The interface should make this process easier, especially for a layman with no programming background.
4.) Swarm autonomy: The other big challenge is how to integrate the autonomy of a swarm into the interface. Part of what makes a swarm collective fast and efficient is that, as a group, they have the capacity to decide among themselves which is the best course of action depending on the situation.
The interface should allow for a partial control of the swarm by the user, while keeping the advantages of the swarm’s autonomy and its ability for collective decision-making.
Sketching and Lo-fi Wireframes
After the initial research phase, it is time to start sketching and scribbling ideas. First, I mapped out different scenarios and created their user flow. I also made a mood board of various interfaces and dashboards from different programs and studied which layouts and styles would incorporate well into the swarm interface. Throughout the design process, I targeted my design for an iPad screen set on landscape.
Based on their existing GUI, I established that the interface should have a large canvas area where the swarm’s mapping data is presented to the user. That’s why I gravitated towards design software like Sketch, Adobe XD, Illustrator, AutoCAD, Google SketchUp, Maya and other similar programs. However, this raised a concern: since these are the tools I am used to as a designer, I might have a bias. The research team, on the other hand, are engineers and scientists who are used to working with code on a terminal.
I prepared lo-fi wireframes and did a quick usability test with my husband and two other members of the team to see if the interface layout would be intuitive for them. After the initial round of user testing, I got a lot of good feedback and useful input from the team. As it turns out, the research team liked the configuration and didn’t have a hard time figuring out how to work with the interface.
Branding and Visual Design
The current iteration of the robotic platform is named “Orion,” after the constellation. Each of the robots in the swarm carries the name of a star in the Orion constellation. I thought this was very poetic, so I decided to carry on with the naming system. Thus, I named the interface Orion.UI.
The main visual of the logo is composed of 3 connected circles, the symbol of the Orion constellation that is also a great visual metaphor for inter-connected robots.
To avoid unnecessary clutter on the screen, I thought a minimalist and structured look would be appropriate. I applied Google’s Material Design principles in choosing the interface’s overall visual aesthetic and colour palette. As a treat for the team, who like working on a terminal, I picked a monospaced font to emulate a CLI feel.
The main navigation bar is at the core of the interface. During the sketching phase and usability tests with the team, I tried placing the navigation bar on each side (top, bottom, left and right) of the screen to see which was the easiest and most natural. The usability test made me realise that when the bar was placed on the right or the bottom, the testers didn’t engage with it and barely noticed it. Having the bar on the top blocks the main view every time a hand reaches to tap an item on it. That leaves the left side, which seemed to be the most intuitive location.
I did a few iterations and decided to have a mix of icons and text labels. The icons break the visual monotony and the text clarifies what the icons mean. To avoid taps on the wrong item, each button on the main navigation bar is evenly spaced with enough breathing room. The same applies to all the other elements in the GUI.
Bottom Tabs
The bottom tabs contain the group, area and command options of the GUI. These are crucial sections because they are where the user can manipulate and command the swarm.
Group Options Tab: The Group Options can be accessed by expanding the group tab. This section lets the user create or select a specific group within the swarm.
The user does not need to input information to create groups within the swarm. The self-organising behaviour of the swarm gives them the ability to group themselves. After the swarm clusters into groups, the group name will appear as an option that the user can select under the “group options” tab. How they group themselves will depend upon prior parameters set in the programming phase. Some examples of grouping:
- All robots within the swarm entering a room
- All robots with a battery level below 50%
- All robots that move to a different floor in the area they are mapping
Apart from these automatic groups, the user can also create a group by selecting and labelling specific robots on the main view through the ‘Create Group’ function located in the navigation bar.
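The automatic grouping behaviour described above can be thought of as a set of simple rules evaluated over each robot’s status. The following is a minimal sketch of that idea, not the team’s actual implementation; the `Robot` fields and rule names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    name: str        # each robot is named after a star in Orion
    battery: float   # charge level, 0.0 to 1.0
    floor: int       # floor the robot is currently on
    in_room: bool    # whether the robot has entered a room

# Hypothetical grouping rules mirroring the examples in the text.
GROUP_RULES = {
    "in-room": lambda r: r.in_room,
    "low-battery": lambda r: r.battery < 0.5,
    "other-floor": lambda r: r.floor != 0,
}

def auto_groups(swarm):
    """Return {group name: [robot names]} for every rule a robot satisfies."""
    groups = {name: [] for name in GROUP_RULES}
    for robot in swarm:
        for name, rule in GROUP_RULES.items():
            if rule(robot):
                groups[name].append(robot.name)
    # Only non-empty groups appear as options in the Group Options tab.
    return {name: members for name, members in groups.items() if members}
```

A robot can belong to several groups at once (for example, a low-battery robot that is also inside a room), which is why each rule is checked independently.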
Area Options Tab: Similar to the Group Options Tab, the Area Options Tab can be accessed by expanding the tab at the bottom. When the swarm starts mapping, it will also have the autonomy to identify special areas. Each identified area will appear as an option that the user can select. Some examples of areas:
- Rooms (After the swarm identifies a room, a camera icon will appear at its location on the main view so the user can tap on it and view the room’s video feed.)
- Elevated spaces
- Entryway
- Different floor level (When the swarm detects and starts mapping another level, the 3D icon view will be activated and a new window will be created for the new floor. This will signal to the user that there is another floor.)
Similar to the group options, the user can also choose to plot an area through the ‘Select Area’ function in the navigation bar and add it to the area list.
Command Options Tab: The Command Options can also be accessed by expanding the Command Options Tab. The most frequent commands are in the command menu and can easily be selected by the user. Some commands in the list are:
- Map selected area
- Avoid selected area
- Redistribute swarm
- Go to safe point
If there’s a special command that is not included in the menu, the user can opt for manual input.
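One way to picture what the Command Options Tab does is as a thin translation layer: each menu entry maps to the terminal command the team already uses, so the user never has to type it. The sketch below assumes hypothetical command strings and flags, since the actual CLI syntax is not described in the article.

```python
# Hypothetical mapping from GUI menu entries to underlying CLI commands
# (the command strings are invented for illustration).
COMMANDS = {
    "Map selected area": "swarm map --area {area}",
    "Avoid selected area": "swarm avoid --area {area}",
    "Redistribute swarm": "swarm redistribute",
    "Go to safe point": "swarm goto-safe",
}

def build_command(menu_item, area=None, group=None):
    """Translate a menu selection into the command string sent to the swarm."""
    template = COMMANDS.get(menu_item)
    if template is None:
        # Not in the menu: the GUI would fall back to manual input.
        raise ValueError(f"Unknown command: {menu_item!r}")
    command = template.format(area=area)
    if group is not None:
        # Target a specific group; omitting it leaves the choice of which
        # robots execute the command to the swarm's own decision-making.
        command += f" --group {group}"
    return command
```

Leaving the `group` argument out corresponds to the design goal in the text: the user states *what* should happen, and the swarm decides *who* does it.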
Operating Orion.UI
To make the interface easy to use, I had to figure out how to make the actions and commands less technical. That includes simplifying technical jargon into more common terms. I also tried to anticipate how people expect to operate the interface.
New Mapping
The user can easily start a new mapping session by tapping ‘New Mapping’ on the start screen. From there, the swarm will send a signal to alert the user that it is on standby and ready to receive commands. Once the user taps ‘Start’, the swarm will begin mapping and the data will be rendered on the main view of the interface.
Selecting an Area and Issuing a Command
The user can select an area from the area list generated in the Area Options Tab and then select a command to execute. The user can either specify who will perform the action (a selected group or the entire swarm) or take advantage of the swarm’s autonomy and leave it up to the swarm to decide who will execute the command.
The user can also tap or draw on the main view to select an area.
Creating Groups
Similar to selecting areas, the user can tap a robot on screen or draw a circle enclosing a group of robots in the main view to select or make them into a group. A dialog box will confirm if the user wants to create a group or plot an area with the gesture. The user can skip this if they selected the ‘Create Group’ icon on the navigation bar before drawing on the main view.
The selected group can also be added to an existing group through the options provided in the group dialog box.
The user can also create multiple groups or areas.
Detecting Events
Events are occurrences that cause inconsistencies in the mapping data, such as an object that has moved.
The swarm can be programmed to perform actions when certain events are detected, and it will sort them out on its own. All actions are recorded in a mission log that the user can review later. If the swarm detects an unexpected event and needs input on how to proceed, it will alert the user.
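The event-handling policy above, handle known events autonomously, log everything, and alert the user only for unexpected events, can be sketched as follows. The event types and handlers are invented for illustration.

```python
# Minimal sketch of the event-handling policy (hypothetical event names).
mission_log = []

HANDLERS = {
    # A known event the swarm can resolve on its own, e.g. an object moved.
    "object-moved": lambda event: f"remapped region {event['region']}",
}

def on_event(event):
    """Handle a detected event autonomously if possible; otherwise alert the user."""
    handler = HANDLERS.get(event["type"])
    if handler is not None:
        action = handler(event)  # the swarm sorts it out on its own
        mission_log.append((event["type"], action))
        return "handled"
    # Unexpected event: record it and ask the user how to proceed.
    mission_log.append((event["type"], "awaiting user input"))
    return "alert-user"
```

Every branch appends to `mission_log`, matching the requirement that all actions are recorded for the user to review later.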
Conclusion
A few decades ago, drones were only operated by people with specialised training. Now, anybody can own and operate one. They have gotten a lot cheaper and the technology has improved a lot, to the point that a drone can be operated with just a mobile phone. Having a good interface definitely helped bring this technology to a wider audience. Long before the drone, computers experienced a similar transformation. They went from a specialised tool for a niche audience to a ubiquitous appliance when graphical interfaces were introduced.
Swarm robotics is an interesting new field with a lot of potential applications. I’m excited to see how swarm collectives will evolve and how they will affect our daily lives.
Orion.UI is a very inspiring and challenging case study for me. It’s mostly science fiction at the moment, but I hope it will come to life in the near future.