Activity

  • Mcconnell posted an update 9 months ago

    An optimization model for the hidden node parameters is established to improve the learning performance. Based on the proposed model-driven ELM architecture, a fast and accurate PPF calculation method is proposed. Simulations on the IEEE 57-bus and Polish 2383-bus systems demonstrate the effectiveness of the proposed method.

    Many statistical learning models assume that the training data and the future unlabeled data are drawn from the same distribution. However, this assumption is difficult to fulfill in real-world scenarios and creates barriers to reusing existing labels from similar application domains. Transfer learning is intended to relax this assumption by modeling relationships between domains, and is often applied in deep learning to reduce the demand for labeled data and training time. Despite recent advances in exploring deep learning models with visual analytics tools, little work has addressed explaining and diagnosing the knowledge transfer process between deep learning models. In this paper, we present a visual analytics framework for multi-level exploration of the transfer learning process when training deep neural networks. Our framework establishes a multi-aspect design to explain how the knowledge learned by an existing model is transferred into a new learning task. Based on a comprehensive requirement and task analysis, we employ descriptive visualization with performance measures and detailed inspections of model behavior at the statistical, instance, feature, and model-structure levels. We demonstrate the framework through two case studies on image classification by fine-tuning AlexNets, illustrating how analysts can use it.

    Existing neural architecture search (NAS) methods usually restrict the search space to pre-defined types of block for a fixed macro-architecture. This strategy limits the search space and reduces architecture flexibility when block proposal search (BPS) is not considered, and block structure search has therefore been the bottleneck in many previous NAS works. In this work, we propose a new evolutionary algorithm, referred to as latency EvoNAS (LEvoNAS), for block structure search, and incorporate it into the NAS framework through a novel two-stage framework referred to as Block Proposal NAS (BP-NAS). Comprehensive experimental results on two computer vision tasks demonstrate the superiority of our approach over state-of-the-art lightweight methods. For the classification task on the ImageNet dataset, our BPN-A outperforms 1.0-MobileNetV2 at similar latency, and our BPN-B saves 23.7% latency compared with 1.4-MobileNetV2 while achieving higher top-1 accuracy. Furthermore, for the object detection task on the COCO dataset, our method achieves a significant performance improvement over MobileNetV2, which demonstrates the generalization capability of the proposed framework.
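    As a rough picture of how a latency-aware evolutionary block search of this kind can be organized, the toy sketch below runs a small evolutionary loop over block configurations under a latency budget. It is a generic illustration only, not the LEvoNAS/BP-NAS algorithm: the block choices, the latency budget, and the accuracy/latency evaluators are hypothetical stubs that a real system would replace with learned predictors, weight sharing, or on-device measurements.

```python
# Toy, generic sketch of latency-aware evolutionary search over block
# configurations. All evaluators are placeholder stubs, not real measurements.
import random

KERNEL_SIZES = [3, 5, 7]       # hypothetical per-block choices
EXPANSION = [1, 3, 6]
NUM_BLOCKS = 8
LATENCY_BUDGET_MS = 100.0      # hypothetical budget


def random_block():
    return {"kernel": random.choice(KERNEL_SIZES),
            "expand": random.choice(EXPANSION)}


def random_arch():
    return [random_block() for _ in range(NUM_BLOCKS)]


def estimate_latency(arch):
    # Placeholder: pretend latency grows with kernel size and expansion ratio.
    return sum(1.5 * b["kernel"] + 2.0 * b["expand"] for b in arch)


def estimate_accuracy(arch):
    # Placeholder proxy: larger kernels/expansions "help", plus noise.
    return sum(0.01 * b["kernel"] + 0.02 * b["expand"] for b in arch) + random.gauss(0, 0.05)


def fitness(arch):
    # Reject over-budget candidates outright; rank the rest by the accuracy proxy.
    if estimate_latency(arch) > LATENCY_BUDGET_MS:
        return -1.0
    return estimate_accuracy(arch)


def mutate(arch, prob=0.2):
    # Resample each block independently with some probability.
    return [random_block() if random.random() < prob else dict(b) for b in arch]


def evolve(pop_size=32, generations=20, parents=8):
    population = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:parents]
        children = [mutate(random.choice(survivors)) for _ in range(pop_size - parents)]
        population = survivors + children
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("best latency (ms):", estimate_latency(best))
    print("best accuracy proxy:", estimate_accuracy(best))
```

    The only design point the sketch illustrates is that candidates violating the latency budget are discarded from selection, so the evolutionary pressure operates entirely within the feasible (budget-respecting) region.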
    Graph convolutional networks (GCNs), which generalize CNNs to more generic non-Euclidean structures, have achieved remarkable performance for skeleton-based action recognition. However, several issues remain in previous GCN-based models. First, the graph topology is set heuristically and fixed across all model layers and input data, which may not suit the hierarchy of the GCN model or the diversity of the data in action recognition tasks. Second, the second-order information of the skeleton data, i.e., the length and orientation of the bones, is rarely investigated, even though it is naturally more informative and discriminative for human action recognition. In this work, we propose a novel multi-stream attention-enhanced adaptive graph convolutional neural network (MS-AAGCN) for skeleton-based action recognition. The graph topology in our model can be learned either uniformly or individually from the input data in an end-to-end manner. This data-driven approach increases the flexibility of the model for graph construction and brings more generality to adapt to various data samples. In addition, the proposed adaptive graph convolutional layer is further enhanced by a spatial-temporal-channel attention module, which helps the model pay more attention to important joints, frames and features. Moreover, the information of both the joints and the bones, together with their motion information, is modeled simultaneously in a multi-stream framework, which yields a notable improvement in recognition accuracy. Extensive experiments on two large-scale datasets, NTU-RGBD and Kinetics-Skeleton, demonstrate that the performance of our model exceeds the state-of-the-art by a significant margin.

    This paper presents a pulse-stimulus sensor readout circuit for use in cardiovascular disease examinations. The sensor is based on a gold nanoparticle plate with an antibody post-modification. The proposed system uses gated pulses to detect the biomarker Cardiac Troponin I in an ionic solution. The characteristic of the electrostatic double-layer capacitor formed by the analyte is related to the concentration of Cardiac Troponin I in the solvent. After sensing by the transistor, a current-to-frequency (I-to-F) converter and a delay-line-based time-to-digital converter (TDC) convert the information into a series of digital codes for further analysis. The design is fabricated in a 0.18-μm standard CMOS process. The chip occupies an area of 0.92 mm² and consumes 125 μW. In measurements, the proposed circuit achieved a sensitivity of 1.77 Hz/pg-mL and a dynamic range of 72.43 dB.

    Unsupervised Domain Adaptation (UDA) makes predictions for target-domain data while manual annotations are available only in the source domain. Previous methods minimize the domain discrepancy while neglecting class information, which may lead to misalignment and poor generalization performance. To tackle this issue, this paper proposes the Contrastive Adaptation Network (CAN), which optimizes a new metric named Contrastive Domain Discrepancy that explicitly models the intra-class and inter-class domain discrepancy. To optimize CAN, two technical issues need to be addressed: 1) the target labels are not available, and 2) conventional mini-batch sampling is imbalanced. We therefore design an alternating update strategy to optimize both the target label estimations and the feature representations, and develop class-aware sampling to enable more efficient and effective training. Our framework applies to both single-source and multi-source domain adaptation scenarios. In particular, to deal with multiple source domains, we propose 1) a multi-source clustering ensemble, which exploits the complementary knowledge of distinct source domains to make more accurate and robust target label estimations, and 2) boundary-sensitive alignment, which makes the decision boundary better fit the target domain.
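    To make the class-aware discrepancy idea above more concrete, the sketch below computes a simplified, linear-kernel variant of an intra-class minus inter-class domain discrepancy from per-class feature means. It is only an illustration of the general idea, not the paper's Contrastive Domain Discrepancy, which relies on kernel mean embeddings, clustering-based target label estimation, and class-aware sampling; the target pseudo-labels here are assumed to come from some external estimation step.

```python
# Simplified, linear-kernel sketch of a class-aware (intra- vs. inter-class)
# domain discrepancy. NOT the paper's exact formulation; pseudo-labels are
# assumed to be provided by an external clustering/estimation step.
import torch


def class_means(features: torch.Tensor, labels: torch.Tensor, num_classes: int):
    """Per-class mean feature vectors plus a mask of classes present in the batch."""
    dim = features.size(1)
    means = torch.zeros(num_classes, dim, device=features.device)
    present = torch.zeros(num_classes, dtype=torch.bool, device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            means[c] = features[mask].mean(dim=0)
            present[c] = True
    return means, present


def class_aware_discrepancy(src_feat, src_labels, tgt_feat, tgt_pseudo_labels, num_classes):
    """Intra-class discrepancy minus inter-class discrepancy between domains.

    Minimizing this pulls same-class means together across domains while
    pushing different-class means apart.
    """
    mu_s, ok_s = class_means(src_feat, src_labels, num_classes)
    mu_t, ok_t = class_means(tgt_feat, tgt_pseudo_labels, num_classes)
    shared = ok_s & ok_t                       # classes seen in both domains
    k = int(shared.sum())
    if k == 0:
        return src_feat.new_tensor(0.0)

    # Squared Euclidean distances between every source/target class-mean pair.
    pair_dist = torch.cdist(mu_s[shared], mu_t[shared]) ** 2
    same_class = torch.eye(k, dtype=torch.bool, device=pair_dist.device)
    intra = pair_dist[same_class].mean()       # same class, different domain
    inter = pair_dist[~same_class].mean() if k > 1 else pair_dist.new_tensor(0.0)
    return intra - inter


if __name__ == "__main__":
    torch.manual_seed(0)
    src_f, tgt_f = torch.randn(64, 128), torch.randn(64, 128)
    src_y = torch.randint(0, 5, (64,))
    tgt_y_hat = torch.randint(0, 5, (64,))     # placeholder pseudo-labels
    print(class_aware_discrepancy(src_f, src_y, tgt_f, tgt_y_hat, num_classes=5))
```

    In a training loop, a term like this would be added to the classification loss and alternated with re-estimation of the target pseudo-labels, which is the intuition behind the alternating update strategy the abstract describes.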
