Design
Icons History, its Usability and User Experience
Icons are everywhere. An icon is a small graphical representation of a program, file, brand or product. If well designed, icons are fast to recognize at a glance and can be an essential part of many user interfaces, visually expressing objects, actions and ideas. When done incorrectly, however, they can create confusion and seriously harm the user experience.
History of icons
Icons are a relatively recent invention: the first icon-based GUI only reached consumers in 1981.
The Xerox Star
Xerox is credited with developing the first GUI (graphical user interface) in the early 1970s. The experimental Xerox Alto lent many of its ideas to the Xerox Star, which in 1981 became the first consumer machine to use icons. Those icons, such as trash cans, folders and printers, have remained nearly unchanged to this day.
The Apple Lisa and The Apple Macintosh
The Macintosh was released in 1984 and the machine’s icons are legendary. The artist Susan Kare designed the icons for this machine, and she said: “I believe that good icons are more akin to road signs rather than illustrations, and ideally should present an idea in a clear, concise, and memorable way. I try to optimize for clarity and simplicity even as palette and resolution options have increased.”
Susan Kare would go on to design the icons used in Windows 3.1 in 1992 as well.
The Amiga Workbench
It took until 1985 for icons to become more than just black-and-white representations. The first four-colour icons appeared on the Amiga 1000. Colour also allowed for multi-state icons: icons that showed you which phase of a process you were in.
What Do You Test When You Test an Icon?
Different testing methods address different aspects of icon usability. But what makes an icon usable? Here are 4 quality criteria for icons:
- Findability: Can people find the icon on the page?
- Recognition: Do people understand what the icon represents?
- Information scent: Can users correctly guess what will happen once they interact with the icon?
- Attractiveness: Is the icon aesthetically pleasing?
All of these issues will be critical for the success of the final design, but must be considered separately to determine how to improve an icon.
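To make the four criteria concrete, here is a minimal sketch of how results from an icon usability test could be summarized against them. All trial data, field names, and the metric choices below are hypothetical and purely illustrative.

```python
# Hypothetical sketch: summarizing icon usability test results against the
# four criteria above (findability, recognition, information scent is folded
# into the recognition guess here, and attractiveness). Data is made up.

def summarize_icon_test(trials):
    """Each trial is a dict with: found (bool), seconds_to_find (float),
    meaning_guess_correct (bool), attractiveness_1_to_5 (int)."""
    n = len(trials)
    return {
        "findability_rate": sum(t["found"] for t in trials) / n,
        "recognition_rate": sum(t["meaning_guess_correct"] for t in trials) / n,
        "avg_seconds_to_find": sum(t["seconds_to_find"] for t in trials) / n,
        "avg_attractiveness": sum(t["attractiveness_1_to_5"] for t in trials) / n,
    }

# Three illustrative participant trials for one icon
trials = [
    {"found": True,  "seconds_to_find": 2.1,  "meaning_guess_correct": True,  "attractiveness_1_to_5": 4},
    {"found": True,  "seconds_to_find": 3.4,  "meaning_guess_correct": False, "attractiveness_1_to_5": 3},
    {"found": False, "seconds_to_find": 10.0, "meaning_guess_correct": False, "attractiveness_1_to_5": 2},
]
print(summarize_icon_test(trials))
```

Keeping the criteria as separate metrics, rather than one combined score, mirrors the point above: each must be considered separately to know *what* to improve about an icon.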
According to Aurora Harley, the benefits of icons in a graphical user interface (GUI) include:
- Icons make good targets: they are typically sized large enough to be easily touched in a finger-operated UI, but also work well with a mouse cursor (in contrast to words, which can suffer from read–tap asymmetry on touch screens).
- Yet they save space: icons can be compact enough to allow toolbars, palettes, and so on to display many icons in a relatively small space.
- Icons are fast to recognize at a glance (if well designed) — particularly true for standard icons that people have seen and used before.
- There is no need to translate icons for international users, provided that the icons are mindful of cultural differences (for example, mailboxes look very different in various countries whereas envelopes look the same, therefore an envelope is a more international icon for an email program than a mailbox).
- Icons can be visually pleasing and enhance the aesthetic appeal of a design.
- They support the notion of a product family or suite when the same icons and style are used in several places.
It’s not that icons can’t work by themselves, but that most people have a fairly limited vocabulary. Floppy disk = save. Printer = print. Play, Pause, Stop, Forward, Back all got defined by tape players from the 1980s.
And, yes, if an icon is ideographic enough, it can be infused with meaning and remembered–take the “Apple” menu in Mac OS, for example. But the richness is just not there relative to human language. (Especially considering that I already know how to speak English; it’s a lot of work to learn how to speak “Iconese” on top of that.)
While Jared Spool, UIE stated after usability testing:
“In the first experiment, we changed the pictures of the icons, but kept them in the same location. We found, in general, users quickly adapted to the new imagery without much problem, particularly for commonly used functions.
In the second experiment, we kept the original pictures, but shuffled their locations on the toolbar. To our surprise, users really struggled with this. It really slowed them down, and, in several cases, they could not complete common tasks. (The icons were all visible, they just had trouble finding them in their new locations.)
From these results, we inferred the location of the icon is more important than the visual imagery. People remember where things are, not what they look like.” (via User Interface Engineering)
Don Norman says that “Inscrutable icons litter the face of the [Apple] devices even though the research community has long demonstrated that people cannot remember the meaning of more than a small number of icons. Icon plus label is superior to icon alone or label alone. Who can remember what each icon means? Not me.”
“Universal” Icons Are Rare
There are a few icons that enjoy mostly universal recognition from users. The icons for home, print, and the magnifying glass for search are such instances. Outside of these examples, most icons continue to be ambiguous to users due to their association with different meanings across various interfaces. This absence of a standard hurts the adoption of an icon over time, as users cannot rely on it having the same functionality every time it is encountered.
For example, if you visit an e-Commerce site, you expect the shopping cart or bag icon to be in the top, right-hand corner of the screen. When you’re logged into a SaaS, you expect your user profile and account settings to be symbolized by a person icon (or your headshot) in the top, right-hand corner of the screen.
If someone changed those familiar placements, it would be difficult for you to find the icons.
For Great User Experience remember when using icons:
- A label plus an image is better than either one alone. If you must use only one, text works better than an image by itself.
- Both an icon’s image and its position are learned, but position is learned faster. If you change the image while the location stays the same, visitors usually won’t notice. However, if you change the location while keeping the image the same, visitors will become frustrated.
- How quickly the average visitor recognizes an icon’s meaning from the image alone tends to track how quickly the team can agree on which icon to use. In other words, concepts that are obvious to a designer (e.g. a question mark for help) are likely to be obvious to a visitor, while concepts with no obvious imagery, such as a return policy, are much harder to convey.
- Universally understood icons work well (e.g. print, close, play/pause, reply, tweet, share on Facebook).
- Icons can serve as bulletpoints, structuring content (e.g. file type icons for PDFs, DOCs, etc.).
- Good icons can make the look of an app or a webpage more pleasing.
- Don’t use an icon if its meaning isn’t 100% clear to everyone. When in doubt, skip the icon and resort to simple copy. A text label is always clearer.
- If you want to keep the graphical advantages of icons, you can of course combine the icon with copy. It’s an excellent solution that unites the best of both worlds. The Mac App Store is doing exactly this. It’s almost mandatory here, because the icons themselves would be totally unclear.
Development of Explainable AI (XAI)
Artificial Intelligence (AI) is a rapidly evolving field that has the potential to change the way we live and work. The latest research in AI is focused on developing more advanced and sophisticated AI systems that can perform a wide range of tasks with greater accuracy and efficiency.
One area of AI research that has gained a lot of attention in recent years is deep learning. This is a type of machine learning that uses neural networks to model complex patterns in data. Deep learning has been used to achieve breakthroughs in areas such as image recognition, natural language processing, and speech recognition. AI is also expected to have a significant impact on the field of robotics. Advancements in AI are making it possible to develop robots that can perform a wide range of tasks with greater autonomy and intelligence. This has the potential to revolutionize industries such as manufacturing, transportation, and healthcare.
Another area of AI research that is attracting a lot of attention is the development of generative models. These are AI systems that can generate new data, such as images or text, based on what they have learned. This has the potential to revolutionize fields such as art and design, music, and writing. A further area of research is explainable AI (XAI), which aims to make AI systems more transparent and understandable. This is important for ensuring that AI systems can be trusted and used responsibly. XAI has been recognised by AI researchers as a crucial component of reliable AI, and explainability has recently attracted increasing attention. To address growing ethical and legal concerns, XAI is a useful tool for answering important “How?” and “Why?” questions about AI systems. However, despite the demand for explainability across several disciplines and the growing interest in XAI research, XAI still has a number of limitations.
The creation of AI systems that can clearly and transparently explain their decision-making processes is known as explainable AI (XAI). This is crucial in circumstances when an AI system’s decisions could have broad repercussions, such as in the legal, financial, and healthcare systems. Here are a few instances of XAI in action:
- Healthcare: An AI system that diagnoses medical issues must be able to justify its findings by referencing the patient’s medical history, test results, and other pertinent information.
- Finance: An AI system that evaluates loan applications must be able to clearly explain the reasons a loan was authorised or denied, taking into account elements like income and credit history.
- Legal: An AI system that helps judges make sentencing decisions must be able to provide a clear explanation of how it arrived at its recommendations, taking into account factors such as the defendant’s prior criminal history, the circumstances of the crime, and relevant laws.
In each of these examples, the ability to explain the decision-making process of an AI system is critical for building trust and ensuring accountability.
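As a minimal illustration of the finance example, the sketch below uses a hand-written linear loan-scoring model whose decision can be decomposed into per-feature contributions. The features, weights, bias, and threshold are all invented for the example; real credit models and XAI tooling are far more involved.

```python
# Illustrative sketch only: a toy linear loan-scoring model whose decision
# can be explained by per-feature contributions. All numbers are made up.

WEIGHTS = {"income_k": 0.04, "credit_score": 0.01, "debt_ratio": -2.0}
BIAS = -8.0
THRESHOLD = 0.0  # score >= THRESHOLD means the loan is approved

def explain(applicant):
    """Return the decision plus each feature's contribution to the score,
    ranked by absolute influence (the 'why' behind the decision)."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "denied"
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return decision, ranked

applicant = {"income_k": 55, "credit_score": 680, "debt_ratio": 0.35}
decision, ranked = explain(applicant)
print(f"Loan {decision}; contributions (most influential first):")
for feature, contribution in ranked:
    print(f"  {feature}: {contribution:+.2f}")
```

Because the model is linear, each feature’s contribution is exact and additive, which is why simple models like this are often called inherently interpretable; for opaque models, post-hoc techniques are needed to approximate such explanations.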
It is important to be aware of the potential of this technology and actively seek ways to harness its power for the benefit of society as a whole. The latest research in AI is focused on developing more advanced and sophisticated AI systems that can perform a wide range of tasks with greater accuracy and efficiency. From deep learning, generative models, explainable AI and robotics, the potential applications of AI are vast and it is expected to play an even greater role in the coming years, leading to new and exciting opportunities for innovation and progress.
Extended Reality (XR), an evolving technology
Extended Reality, or XR, is a catch-all phrase that refers to a variety of technologies, including Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). These innovations enable the development of immersive and interactive experiences that blend the real and virtual worlds. In the world of entertainment and gaming, XR has several applications. Virtual worlds and games that can transport users to other locations and eras can be created using VR and MR. Training and education are further application areas for XR. Users can learn and hone new abilities in a secure environment by using VR and AR to create realistic simulations and scenarios.
The performance and responsiveness of XR applications have recently improved thanks to the use of edge computing and 5G. Edge computing allows data processing to occur closer to the user, which reduces latency and increases responsiveness. The use of AI and machine learning to enhance the realism and interactivity of XR experiences is another breakthrough. For instance, MIT researchers have created a virtual reality (VR) system that uses AI to create realistic scenes and characters that react to the user’s input in real time.
A rapidly developing technology, XR has numerous potential uses across numerous industries. There will probably be more advancements and use cases in the near future since it enables the construction of immersive and interactive experiences that blur the boundaries between the real and virtual worlds.
Multi-material printing and innovation in hybrid manufacturing
A type of 3D printing called multi-material additive manufacturing allows for the simultaneous printing of numerous materials, each with a variety of unique features. This technology has a wide range of applications and the power to completely alter how goods are created. The production of intricate and personalised products is one use for multi-material printing. It can be used, for instance, to print items with various textures, colours, and even degrees of hardness or flexibility. This makes it possible to produce items that would be challenging or impossible to make using conventional manufacturing techniques.
Engineering and prototyping both use multi-material printing. It can be used, for instance, to make workable prototypes of components such as gears and bearings that have different properties in a single print. This can greatly speed up the prototyping process and reduce the costs associated with creating multiple prototypes. Multi-material printing also has applications in the field of medicine. For example, it can be used to create customized prosthetics and other medical devices that have different properties in a single print. This allows for the creation of prosthetics that are more comfortable and functional for the patient.
New printing methods and materials have recently been adopted in multi-material printing. As an illustration, MIT researchers have created a technique for printing several materials through a single nozzle, enabling the production of objects with various qualities in a single print. Another advance is “multi-material jetting,” which enables a single print head to print numerous materials simultaneously. For instance, the Stratasys J750 3D printer, and the J850, which aims to “push the boundaries of 3D printed realism,” can print with up to six different materials simultaneously, such as transparent materials, rigid and flexible plastics, and even color-changing materials.
Another innovation is “hybrid manufacturing,” which combines various production techniques, including 3D printing, CNC machining, and casting, to produce items with distinctive features. For example, researchers at the Technical University of Munich have developed a hybrid manufacturing process that allows for the printing of high-strength aluminium parts with embedded electronics.
Multi-material printing is a rapidly evolving technology with many potential applications in a wide range of industries. It has the ability to produce complex and customized objects that would be difficult or impossible to create using traditional manufacturing methods, and it’s likely that we will see more developments in the near future.