Topic outline

  • Chapter 1 - Overview of RFID Sensor Technology


    Author: Boris Antić, Faculty of Technical Sciences, University of Novi Sad, Serbia

    This chapter introduces RFID technology with emphasis on sensor applications, and in particular on applications in agriculture. The chapter provides an overview of RFID systems classified by operating frequency, readout range, reader and tag power supply, data rate, and price. It further introduces the concept of inverse RFID design and shows how various analogue and digital sensors can be developed using RFID. Some state-of-the-art innovations are presented to demonstrate the current limits of the technology. In line with the SMART4ALL verticals, special emphasis is given to agricultural applications and to incorporating RFID into various edge systems that are not continuously online but can be powered up and read out ad hoc, fitting the restrictions of Customised Low-Energy Computing (CLEC) for Cyber-Physical Systems (CPS) and the Internet of Things (IoT). Part of the presented material covers original research results obtained in various projects conducted by the chapter author at the University of Novi Sad, Serbia.
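    The relation between operating frequency, transmit power and readout range mentioned above can be illustrated with the Friis transmission equation for a passive UHF tag. The sketch below is a rough free-space estimate under assumed, illustrative numbers (European 868 MHz band, a dipole-like tag, a nominal chip sensitivity); these values are not taken from the chapter, and the model ignores multipath, polarization loss and tag detuning.

```python
import math

def read_range_m(freq_hz, eirp_w, tag_gain, chip_sensitivity_w, tau=1.0):
    """Free-space forward-link read range of a passive UHF RFID tag,
    from the Friis equation: r = (lambda / 4*pi) * sqrt(EIRP * G * tau / P_min).
    tau is the power transmission coefficient of the chip-antenna match."""
    wavelength = 3.0e8 / freq_hz
    return (wavelength / (4 * math.pi)) * math.sqrt(
        eirp_w * tag_gain * tau / chip_sensitivity_w)

# Illustrative (assumed) numbers: 868 MHz, 3.28 W EIRP,
# dipole-like tag (gain 1.64), -20 dBm chip sensitivity, tau = 0.5.
r = read_range_m(868e6, 3.28, 1.64, 1e-5, tau=0.5)
print(f"theoretical read range: {r:.1f} m")
```

    In practice, measured read ranges are well below this free-space figure, which is one reason ad hoc, close-proximity readout by a mobile reader is attractive.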

  • Chapter 2 - UAV platforms for semantic scene analysis in agricultural applications


    Author: Branko Brkljač, Faculty of Technical Sciences, University of Novi Sad, Serbia

    Motivated by recent trends in the field of embedded vision platforms, this chapter discusses the potential of such solutions to provide foundations for the next generation of Cyber-Physical Systems (CPS). Improved capabilities and falling prices of these platforms will have a profound effect on their everyday usage and applications. In comparison to speech and natural language processing, which have established speech recognition and machine translation as indispensable in many contemporary CPSs, the vision community is still searching for an application so necessary and desirable that most consumers would buy specific vision hardware just to run it. That would be the ultimate proof of the core value of the technology in the market. Although vision problems come with a long tradition and a history of numerous solutions, it is still hard to point to a single application that incorporates many specific vision tasks into one device and is ubiquitously useful and affordable to all (as, for example, the smartphone has been in the fields of communication and personal computing). However, with the development of new miniaturization technologies and spatial AI, it is reasonable to expect more possibilities for designing CPSs capable of visually understanding outdoor, dynamic and uncontrolled environments. One step in this direction is embedded vision platforms that, besides powerful computing capabilities, also provide multimodal perception and thus improve algorithm performance. As an example, stereo depth perception is discussed in the context of new spatial AI platforms such as the OAK-D lite, and some possibilities for its improvement and integration into future CPSs are pointed out.
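    The stereo depth perception mentioned above boils down to triangulation: a point seen with disparity d (in pixels) by two cameras with focal length f (in pixels) and baseline B lies at depth Z = f·B/d. The sketch below illustrates this with assumed, OAK-D-lite-like numbers (7.5 cm baseline, 450 px focal length); these are illustrative values, not official device specifications.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a scene point from stereo disparity: Z = f * B / d.
    Larger disparity means a closer point; zero disparity is at infinity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed, illustrative numbers (not official specs): 7.5 cm baseline,
# 450 px focal length, a match found 15 px apart in the two images.
z = depth_from_disparity(disparity_px=15.0, focal_px=450.0, baseline_m=0.075)
print(f"depth: {z:.2f} m")  # 450 * 0.075 / 15 = 2.25 m
```

    The 1/d relationship also explains why depth resolution degrades quadratically with distance, one of the limitations that spatial AI platforms try to mitigate with subpixel disparity estimation.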

    The course participants are encouraged to first read the scientific paper before moving on to the PowerPoint presentation and the video clips.

  • Exercises


    Author: Tjark Schütte, Leibniz Institute for Agricultural Engineering and Bioeconomy e.V. (ATB), Engineering for Crop Production - Agromechatronics, Potsdam-Bornim, Germany

    These exercises contain a presentation on localising robots and transforming data in the Robot Operating System (ROS), with emphasis on how RFID data can be read out and geo-tagged by robots in the form of unmanned ground vehicles and/or drones. Two scenarios are considered, either or both of which may apply:

    scenario #1: the position of the robot is known, but the positions of the sensors are not;

    scenario #2: the positions of the sensors are known (e.g. from UAV imagery), but the position of the robot is not, with the option of using the RFID sensors to update it.

    The course participants are invited to solve practical tasks of collecting and geo-tagging data in either scenario 1 or 2 on a simulated mobile robot running in a virtual machine. The robot comes with movement primitives, and the sensors publish ROS messages. Scenario 1 is the easier of the two, especially given the restrictions of this course and the complexity of localization theory.
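    As a hint of what scenario 1 involves, the sketch below transforms a hypothetical RFID detection, reported as range and bearing in the robot's body frame, into world coordinates using the known robot pose, which is essentially what a tf lookup does in ROS. The function name and all numbers are illustrative assumptions, not part of the exercise code.

```python
import math

def geotag(robot_x, robot_y, robot_yaw, range_m, bearing_rad):
    """Scenario 1 sketch: the robot pose (x, y, yaw) in the world frame
    is known; an RFID detection given as (range, bearing) in the robot
    frame is rotated by the yaw and translated to world coordinates."""
    wx = robot_x + range_m * math.cos(robot_yaw + bearing_rad)
    wy = robot_y + range_m * math.sin(robot_yaw + bearing_rad)
    return wx, wy

# Hypothetical reading: robot at (10, 5) heading 90 degrees; a tag
# detected 2 m dead ahead should be geo-tagged at (10, 7).
x, y = geotag(10.0, 5.0, math.pi / 2, 2.0, 0.0)
print(f"tag at ({x:.2f}, {y:.2f})")
```

    In scenario 2 the same geometry is run in reverse: known tag positions constrain the unknown robot pose, which is where the localization theory mentioned above comes in.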