Shoemaker posted an update 9 months, 1 week ago
Currently, statistics and information for ice hockey training are mostly gathered by hand, and the automated systems that do exist are expensive and difficult to set up. To remedy this, in this paper we propose and analyse a wearable system that combines player localisation and activity classification to gather this information automatically. A stick-worn inertial measurement unit was used to capture acceleration and rotation data from six ice hockey activities. A convolutional neural network distinguished the six activities of an unseen player with 76% accuracy at a sampling frequency of 100 Hz; on unseen data from players included in the training set, it reached 99% accuracy. With a peak detection algorithm, activities could be automatically detected and extracted from a complete measurement for classification. Additionally, the feasibility of a time-difference-of-arrival-based ultra-wideband system operating at a 25 Hz update rate was assessed. We concluded that this system, once the data were filtered and smoothed, provided acceptable accuracy for use in ice hockey. Combining both components, it was possible to gather useful information about a wide range of interesting performance measures. This shows that our proposed system is a suitable solution for the analysis of ice hockey.

Aquaculture farming faces the challenge of increasing production while maintaining the welfare of livestock, using resources efficiently, and remaining environmentally sustainable. To help overcome these challenges, remote, real-time monitoring of the environmental and biological conditions at the aquaculture site is highly important. Multiple remote monitoring solutions for investigating the growth of seaweed are available, but no integrated solution that monitors different biotic and abiotic factors exists. A new integrated multi-sensing system would reduce the cost and time required to deploy the system and provide useful information on the dynamic forces affecting the plants and the associated biomass of the harvest. In this work, we present the development of a novel miniature low-power NFC-enabled data acquisition system to monitor seaweed growth parameters in an aquaculture context. It logs temperature, light intensity, depth, and motion, and these data can be transmitted or downloaded to enable informed decision making by seaweed farmers. The device is fully customisable and designed to be attached to seaweed or the associated mooring lines. The developed system was characterised in laboratory settings to validate and calibrate the embedded sensors. It performs comparably to commercial environmental sensors, which allows the device to be deployed in both commercial and research settings.

Handwritten keyword spotting (KWS) is of great interest to the document image research community. In this work, we propose a learning-free keyword spotting method following the query-by-example (QBE) setting for handwritten documents. It consists of four key processes: pre-processing, vertical zone division, feature extraction, and feature matching. The pre-processing step deals with the noise found in the word images and the skew of the handwriting caused by the varied writing styles of individuals. Next, the vertical zone division splits the word image into several zones; the number of vertical zones is guided by the number of letters in the query word image (a minimal sketch of this split is given below).
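To make the zone-division step concrete, here is a minimal sketch, not the paper's implementation: it assumes a binarised word image held in a NumPy array and an equal-width split, and the helper name split_into_zones is ours.

```python
import numpy as np

def split_into_zones(word_img: np.ndarray, n_letters: int) -> list:
    """Split a word image into n_letters vertical zones of (near-)equal width.

    word_img  : 2-D array (height x width), e.g. a binarised word image.
    n_letters : number of letters in the query word (user-supplied).
    """
    # NOTE: the equal-width split is an illustrative assumption; the paper only
    # states that the letter count guides the number of zones.
    _, width = word_img.shape
    column_groups = np.array_split(np.arange(width), n_letters)
    return [word_img[:, cols[0]:cols[-1] + 1] for cols in column_groups]

# Example: a dummy 40 x 120 word image split for a 5-letter query word.
zones = split_into_zones(np.zeros((40, 120), dtype=np.uint8), 5)
print([z.shape for z in zones])  # [(40, 24), (40, 24), (40, 24), (40, 24), (40, 24)]
```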
To obtain this information (i.e., the number of letters in a query word image) during experimentation, we use the text encoding of the query word image, which the user provides to the system. The feature extraction process uses the Hough transform. The last step is feature matching, which compares the features extracted from the word images and then generates a similarity score. The performance of this algorithm has been tested on three publicly available datasets: IAM, QUWI, and ICDAR KWS 2015. The proposed method outperforms the state-of-the-art learning-free KWS methods considered here for comparison when evaluated on these datasets. We also evaluated the present KWS model with state-of-the-art deep features and found that the features used in this work perform better than the deep features extracted with the InceptionV3, VGG19, and DenseNet121 models.

This paper proposes a new haptic shared control concept between the human driver and the automation for lane keeping in semi-autonomous vehicles. Based on the principle of human-machine interaction during lane keeping, a level of cooperativeness for completing the driving task is introduced. Using the proposed human-machine cooperative status along with the driver workload, the required level of haptic authority is determined according to the driver's performance characteristics. Then, a time-varying assistance factor is developed to modulate the assistance torque, which is designed from an integrated driver-in-the-loop vehicle model taking into account the yaw-slip dynamics, the steering dynamics, and the human driver dynamics. To deal with the time-varying nature of both the assistance factor and the vehicle speed involved in the driver-in-the-loop vehicle model, a new ℓ∞ linear parameter-varying control technique is proposed. The predefined specifications of the driver-vehicle system are guaranteed using Lyapunov stability theory. The proposed haptic shared control method is validated in various driving tests conducted with high-fidelity simulations. Extensive performance evaluations highlight the effectiveness of the new method in terms of driver-automation conflict management.

In recent years, more and more frameworks have been applied to brain-computer interface technology, and electroencephalogram-based motor imagery (MI-EEG) is developing rapidly. However, improving the accuracy of MI-EEG classification remains a challenge. In this paper, a deep learning framework termed IS-CBAM-convolutional neural network (CNN) is proposed to address the non-stationary nature, the temporal localization of excitation occurrence, and the frequency-band distribution characteristics of the MI-EEG signal. First, exploiting the logically symmetrical relationship between the C3 and C4 channels, the result of the time-frequency image subtraction (IS) for the MI-EEG signal is used as the input of the classifier; this both reduces redundancy and increases the feature differences in the input data. Second, an attention module is added to the classifier: a convolutional neural network is built as the base classifier, and information on the temporal location and frequency distribution of MI-EEG signal occurrences is adaptively extracted by introducing the Convolutional Block Attention Module (CBAM).
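As a rough illustration of the image-subtraction (IS) input described above, the following is a minimal sketch assuming two single-trial C3/C4 recordings as 1-D NumPy arrays; the helper name image_subtraction and the STFT settings are our own assumptions, not taken from the paper.

```python
import numpy as np
from scipy.signal import stft

def image_subtraction(c3: np.ndarray, c4: np.ndarray, fs: float = 250.0) -> np.ndarray:
    """Return the difference of the C3 and C4 time-frequency magnitude images.

    This single-channel 'IS' image is what the abstract describes feeding
    into the CBAM-CNN classifier.
    """
    # NOTE: the sampling rate and STFT window length are illustrative choices,
    # not parameters reported in the abstract.
    _, _, z3 = stft(c3, fs=fs, nperseg=128)   # complex spectrogram of C3
    _, _, z4 = stft(c4, fs=fs, nperseg=128)   # complex spectrogram of C4
    return np.abs(z3) - np.abs(z4)            # subtract the magnitude images

# Example with random data standing in for one 4-second motor-imagery trial.
rng = np.random.default_rng(0)
is_image = image_subtraction(rng.standard_normal(1000), rng.standard_normal(1000))
print(is_image.shape)  # (frequency bins, time frames)
```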