Urban environments increasingly incorporate service robots into public life, and sustainable planning demands a deeper understanding of how design choices shape human trust. Focusing on compact AI-driven robots intended for urban public spaces, this research examines how different degrees of design minimalism influence trust across demographic groups distinguished by age, education, and technological familiarity. It addresses emotional engagement, functional opportunities, and the economic implications of manufacturing complexity, emphasizing the need for simple, cost-effective designs aligned with realistic user expectations. Using visual surveys, the study presents three types of compact robots (a handheld block, a rolling object, and a small flying drone) with different levels of human-like features, each shown in one of three roles: assistance, security, or delivery. The study aims to identify how far minimalist design can go while maintaining public trust in compact AI robots. The results will help city planners and robot designers create affordable, practical robots that people can trust and use in public spaces.
While cities are only beginning to explore service robot deployment, the pace of change suggests that a more practical and integrated approach will soon be essential. These robots will evolve rapidly, be updated like smartphones, and require regular maintenance, storage, and responsible integration. Without demonstrated public acceptance, cities risk wasting resources on robots that people will not trust or that will cost too much to implement. Recent examples, like the mass abandonment of shared bicycles in China, show how promising ideas can collapse into waste without long-term planning.
Service robots could follow the same path, turning vibrant urban spaces into graveyards of forgotten machines more reminiscent of dystopian science fiction. Existing studies have separately explored robot aesthetics, trust dynamics, and public expectations. Research shows that simple, low-cost designs do not reduce trust when robots behave ethically (Willems et al., 2022). Trust depends more on a robot’s responsiveness and social engagement than on its appearance (Breazeal et al., 2013). Humanlike designs can boost emotional connection, but they sharply increase production costs and can mislead users if abilities do not match appearances (Easler et al., 2022). While significant advances have been made, an integrated model linking trust, robot design, AI functionality, and diverse user expectations for sustainable urban deployment remains largely undeveloped.
A clear understanding of public trust factors will allow cities to set practical design requirements that ensure affordable, functional, and adaptable robots that can serve diverse populations over time. This study investigates how varying levels of minimalist design influence public trust in compact AI robots performing tasks like assistance, security, and delivery. The goal is to identify the point where further simplification begins to erode trust. The findings will provide actionable insights for urban planners, manufacturers, and policymakers seeking to integrate AI-driven robots sustainably into smart city ecosystems.
Literature Review: To build on these insights, recent research on trust, robot design, and user expectations must be examined. Research into human trust in robots has accelerated globally in recent years, with significant studies exploring the effects of robot appearance, AI capabilities, and demographic differences. Findings show that minimalistic, low-cost designs do not necessarily harm trust if robots behave ethically and communicate their functions clearly (Collins & Eder, 2020). Trust is dynamic, evolving based on a robot’s performance, reliability, and transparency (Yang, Zhang, & Liu, 2024). Beyond individual studies, recent reviews have examined how robot design influences trust across different user groups and contexts (Campagna & Rehm, 2025). Visual trust assessments have been widely used, confirming that first impressions based on design features can strongly affect public willingness to interact with AI robots (Risi et al., 2023). At the same time, research points out that while human-like designs can enhance emotional engagement, they also increase production complexity and risk creating unrealistic expectations if real functional abilities do not match them (Easler et al., 2022). However, despite these results, critical gaps remain. Existing studies tend to examine specific factors, such as appearance, trust dynamics, or AI functionality, in isolation, without integrating them into urban public environments (Haring et al., 2019). Few studies account for how trust varies across demographic factors such as age, education, and technological familiarity, especially in essential urban service functions like assistance, security, and delivery (Marcu et al., 2023).
Methodology.
The robot designs will illustrate varying levels of human-likeness in a controlled, stylized form.
1) Quantitative Analysis. This study will use structured visual surveys to assess public trust in compact AI-powered service robots. Approximately 90 participants will be recruited across age groups (18–24, 25–44, 45–75) and education levels (higher education / no higher education) to capture demographic diversity.
Participants will evaluate three robot types (a handheld block, a rolling object, and a compact drone), each presented with three levels of human-like design features (minimal, moderate, high), across three functional roles: information assistance, security aid, and delivery service. This structure will result in 27 evaluation scenarios per participant (3 robot types × 3 design levels × 3 roles), with an option to reject any role if trust is not established.
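The full factorial crossing of the three factors can be sketched as follows; the labels are illustrative placeholders for the survey conditions, not the survey instrument itself:

```python
from itertools import product

# Illustrative labels for the three experimental factors described above
robot_types = ["handheld block", "rolling object", "compact drone"]
design_levels = ["minimal", "moderate", "high"]
roles = ["information assistance", "security aid", "delivery service"]

# Full factorial design: every combination of factor levels is one
# evaluation scenario shown to each participant
scenarios = list(product(robot_types, design_levels, roles))
print(len(scenarios))  # 3 x 3 x 3 = 27
```

Enumerating the design this way makes the scenario count explicit and provides a fixed ordering that can be randomized per participant to counter order effects.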
The collected data will be analyzed to identify trust thresholds related to design simplification, demographic differences in trust patterns, and the impact of perceived AI functionality. The results will guide practical recommendations for sustainable urban robot integration.
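One way the threshold analysis could proceed is a simple cross-tabulation of mean trust ratings by design level within each demographic group; the sketch below uses simulated ratings purely for illustration (the column names, the 1–7 Likert scale, and the random data are assumptions, to be replaced by the actual survey responses):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
levels = ["minimal", "moderate", "high"]

# Simulated long-format responses: 90 participants x 3 design levels,
# each rating trust on a 1-7 Likert scale (placeholder data only)
df = pd.DataFrame({
    "participant": np.repeat(range(90), 3),
    "design_level": levels * 90,
    "age_group": np.repeat(rng.choice(["18-24", "25-44", "45-75"], 90), 3),
    "trust": rng.integers(1, 8, 270),
})

# Mean trust per design level within each age group: a marked drop at
# "minimal" relative to "moderate" would locate the simplification
# threshold for that demographic
summary = df.groupby(["age_group", "design_level"])["trust"].mean().unstack()
print(summary)
```

In the real analysis, the same long-format table would also support inferential tests (e.g., a mixed-design ANOVA across design level, robot type, and demographic group) rather than descriptive means alone.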
2) Qualitative Analysis. Open-ended questions will be included at the end of the survey to capture participants' subjective impressions and emotional responses to each design. Additionally, semi-structured interviews will be conducted with 3–5 experts in urban planning, AI ethics, and robotics design.
Hypotheses.
● Minimalistic design will maintain public trust when the robot’s purpose remains straightforward to interpret.
● Younger participants and those with higher education will trust minimalistic robot designs more.
● Public trust in minimalistic designs will increase when robots are perceived as having actual AI-driven abilities beyond simple programmed behavior.
Expected Outcomes.
This study is expected to identify how minimal human-like features in small AI robots influence levels of public trust in urban settings. It will reveal which specific design elements contribute most to trust and at what point added features no longer significantly improve perception.
Conclusion.
Smart cities need more than good intentions to successfully integrate service robots. The future will not be decided by technology alone. It will be decided by trust.
References:
Breazeal, C., DePalma, N., Orkin, J., Park, H. W., & Deyle, T. (2013). Crowdsourcing human-robot interaction: New methods and system evaluation in a public environment. Journal of Human-Robot Interaction, 2(1), 82–108. https://doi.org/10.5898/JHRI.2.1.Breazeal
Campagna, G., & Rehm, M. (2025). A systematic review of trust assessments in human–robot interaction. ACM Transactions on Human-Robot Interaction, 14(2), 1–28. https://doi.org/10.1145/3706123
Collins, E., & Eder, K. (2020). Trust in robots: Challenges and opportunities. Current Robotics Reports, 1(4), 234–241. https://doi.org/10.1007/s43154-020-00029-y
Easler, W. J., Steverson, S. P., Kelly, A. C., & Sycara, K. (2022). Human-robot interaction: The impact of robotic aesthetics on anticipated human trust. PeerJ Computer Science, 8, e837. https://doi.org/10.7717/peerj-cs.837
Haring, K., Castelfranchi, C., & Scheutz, M. (2019). Evaluating public opinion towards robots: A mixed-method approach. Paladyn, Journal of Behavioral Robotics, 10(1), 306–317. https://doi.org/10.1515/pjbr-2019-0023
Marcu, G., Lin, I., Williams, B., & Robert, L. P. Jr. (2023). Would I feel more secure with a robot?: Understanding perceptions of security robots in public spaces. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW2), Article 293. https://doi.org/10.1145/3610171
Risi, S., Mattfeld, D. C., & Schaer, P. (2023). Security aspects of social robots in public spaces: A systematic mapping study. Sensors, 23(19), 8056. https://doi.org/10.3390/s23198056
Willems, J., Schmidthuber, L., Vogel, D., & Ebinger, F. (2022). Ethics of robotized public service: The role of robot design and its actions. Government Information Quarterly, 39(4), Article 101683. https://doi.org/10.1016/j.giq.2022.101683
Yang, J., Zhang, Y., & Liu, C. (2024). Human trust in robots: A survey on trust models and their controls/robotics applications. IEEE Open Journal of Control Systems, 3, 184–199. https://doi.org/10.1109/OJCSYS.2024.3354336