Certificate of participation in the YSIP-2019 conference
Purpose: This article develops a method for calculating the orientation of a car in a photograph taken by a dashboard video recorder (DVR). The problem arises in the design of autonomous vehicles for urban environments.
Design/Methodology/Approach: The solution is based on a neural network trained on a dataset of 4262 pre-annotated photographs provided by the Kaggle community.
Originality/Value: For regression of the three angular coordinates we employ a VGGNet-16 backbone with all fully connected layers removed; the output of the last MaxPooling2D layer is fed to three parallel regression groups, one per angle. Each group contains two branches: a softmax classification branch that estimates a confidence for each angle bin, and a regression branch that outputs the cos(∆θ) and sin(∆θ) of an angular correction relative to the chosen bin. Since the task did not require direct estimation of 3D car box locations on the road, our network has no regression block for the spatial dimensions of a car's 3D bounding box, which is used when testing similar algorithms on datasets such as KITTI and Pascal3D+.
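The bin-plus-correction decoding described above can be sketched as follows. This is a minimal NumPy illustration of the MultiBin-style recovery step, not the paper's implementation; the function and parameter names (`decode_angle`, `num_bins`) are hypothetical:

```python
import numpy as np

def decode_angle(bin_conf, bin_delta, num_bins=8):
    """Recover an angle from MultiBin-style network outputs.

    bin_conf  : (num_bins,) softmax confidences, one per angle bin
    bin_delta : (num_bins, 2) per-bin (cos dtheta, sin dtheta) corrections
    """
    # each bin covers 2*pi / num_bins radians of the full circle
    bin_width = 2.0 * np.pi / num_bins
    best = int(np.argmax(bin_conf))        # most confident bin
    bin_center = best * bin_width
    cos_d, sin_d = bin_delta[best]
    # the regressed pair encodes an offset relative to the chosen bin
    delta = np.arctan2(sin_d, cos_d)
    return (bin_center + delta) % (2.0 * np.pi)
```

Regressing cos/sin of the correction rather than the raw angle avoids the discontinuity at the bin boundary, which is the usual motivation for this parameterization.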
Gordeev A.Y., Klyachin V.A. (2021) Determination of the Spatial Position of Cars on the Road Using Data from a Camera or DVR. In: Popkova E.G., Sergi B.S. (eds) "Smart Technologies" for Society, State and Economy. ISC 2020. Lecture Notes in Networks and Systems, vol 155. Springer, Cham. https://doi.org/10.1007/978-3-030-59126-7_20.
This article examines the problem of object recognition in photographs. Objects of historical or cultural significance were chosen as the recognition targets; the study was limited to two such objects. It is assumed that the pictures may be taken at different times of day and from different angles. Accordingly, the first approach extracts the contours of the object in the image and computes their numerical characteristics: the moments of the contour curves up to the third order inclusive, the central moments, the normalized moments, and the Hu invariant moments. In addition, the integral of the total curvature of the contour curve is computed; geometrically, this integral equals the total variation of the slope of the tangent line to the curve. The second approach splits the original image into rectangular cells and computes the same moments, but for the brightness function. Finally, the third approach replaces the original image with an image of the contours themselves. The article reports results for several machine learning models, including convolutional neural networks and the nearest neighbors method; gradient boosting is used to improve the results.
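The moment features used above can be illustrated with a short sketch. The snippet below computes central, scale-normalized, and the first two Hu invariant moments of a 2-D brightness array in NumPy; the function names are hypothetical, and a full pipeline would compute all seven invariants (e.g. via OpenCV's `cv2.moments` and `cv2.HuMoments`):

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw image moment m_pq = sum x^p y^q I(x, y)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return np.sum((x ** p) * (y ** q) * img)

def hu_invariants(img):
    """First two Hu moment invariants of a 2-D brightness array."""
    m00 = raw_moment(img, 0, 0)
    cx = raw_moment(img, 1, 0) / m00       # centroid x
    cy = raw_moment(img, 0, 1) / m00       # centroid y
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]

    def mu(p, q):                          # central moment (translation-invariant)
        return np.sum(((x - cx) ** p) * ((y - cy) ** q) * img)

    def eta(p, q):                         # scale-normalized moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```

Because the central moments are taken about the centroid and normalized by the area, the resulting invariants are unchanged by translation and scaling, which is what makes them usable across different viewpoints.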
Klyachin V., Klyachin A. (2021) Pairwise Separability in the Problem of Identifying Objects in Photographs Using the Example of Cultural and Historical Architectural Buildings. In: Silhavy R. (eds) Artificial Intelligence in Intelligent Systems. CSOC 2021. Lecture Notes in Networks and Systems, vol 229. Springer, Cham. https://doi.org/10.1007/978-3-030-77445-5_25
Coursera: certificate of completion of Stanford University's Machine Learning course, awarded to Aleksey Gordeev
This paper describes an autonomous mobile search robot with AI that is currently under development, and the results obtained so far. We describe the theoretical concepts underlying the robot system, the algorithms used for motion control and data processing, the hardware, and the software implementation of the applied algorithms. The developed robot has the following features. First, it employs two lidars whose laser scanning data are combined into a single point cloud. Second, a deep convolutional neural network (DCNN) detects and recognizes the objects of interest, and a Dlib tracker follows them after detection. In addition, the robot can search for objects in low-light conditions thanks to a Sony IMX219 camera with an auxiliary IR LED system. An NVIDIA Jetson Nano single-board computer serves as the main computation and control unit, while a second board, an Orange Pi PC, processes the point clouds from the two lidars. For motion control we implemented a computationally simple system based on fuzzy logic, with Google Cartographer used for SLAM; the A* algorithm was applied for better obstacle avoidance. Functional schemes and further description illustrate the building blocks of the developed ROS-based software system for robot localization and mapping, motion control, and object detection and recognition.
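Merging the two lidars' data into a single point cloud amounts to converting each scan from polar to Cartesian coordinates and transforming it by the sensor's mounting pose. A minimal 2-D NumPy sketch, with hypothetical function names and an assumed (x, y, yaw) pose convention; the robot's actual ROS pipeline would do this with TF transforms:

```python
import numpy as np

def scan_to_points(ranges, angle_min, angle_inc):
    """Convert a lidar scan (beam ranges, radians) to Nx2 Cartesian points."""
    angles = angle_min + angle_inc * np.arange(len(ranges))
    r = np.asarray(ranges, dtype=float)
    return np.stack([r * np.cos(angles), r * np.sin(angles)], axis=1)

def merge_scans(points_a, pose_a, points_b, pose_b):
    """Transform two point sets into a common base frame and stack them.

    pose = (x, y, yaw) of each sensor in the robot's base frame.
    """
    def to_base(points, pose):
        x, y, yaw = pose
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s], [s, c]])    # 2-D rotation by yaw
        return points @ R.T + np.array([x, y])
    return np.vstack([to_base(points_a, pose_a), to_base(points_b, pose_b)])
```

Once both scans live in the same frame, the stacked array can be fed to downstream mapping or obstacle-avoidance code as one point cloud.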
Gordeev A., Klyachin V., Kurbanov E., Driaba A. (2020). Autonomous Mobile Robot with AI Based on Jetson Nano. In: Arai K., Kapoor S., Bhatia R. (eds) Proceedings of the Future Technologies Conference (FTC) 2020, Volume 1. FTC 2020. Advances in Intelligent Systems and Computing, vol 1288. Springer, Cham. https://doi.org/10.1007/978-3-030-63128-4_15
Certificate of participation in the Future Technologies Conference 2021.
Certificate of participation in the Future Technologies Conference 2020.
This paper investigates the problem of determining the spatial position of an architectural structure from its flat image. The first part gives a theoretical justification of a method based on finding a system of three quadruples of points with certain properties; formulas and equations are derived that determine the desired set of object orientation parameters. An illustrative example is given and the calculation error is estimated. The second part considers a method based on the same idea but allowing the calculations to be automated: the straight line segments needed for the calculations are replaced by a set of features that approximately describe the sought segments. Computational experiments show that this approach has an error no worse than that of manually locating the required points on the photograph.
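One concrete step in computations of this kind, intersecting extended line segments from the photograph to obtain a vanishing point, which constrains the object's orientation, can be sketched in NumPy. This is an illustrative least-squares construction with hypothetical names, not the authors' formulas:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line (a, b, c) with ax + by + c = 0 through two image points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(segments):
    """Least-squares intersection of lines extended from several segments.

    segments: list of (p, q) endpoint pairs believed parallel in 3-D.
    """
    L = np.array([line_through(p, q) for p, q in segments], dtype=float)
    # the common intersection v minimizes |L v| subject to |v| = 1,
    # i.e. the right singular vector for the smallest singular value
    _, _, vt = np.linalg.svd(L)
    v = vt[-1]
    return v[:2] / v[2]                    # back to pixel coordinates
```

With noisy, automatically extracted segments the SVD step averages out the errors, which matches the paper's observation that feature-based segment estimates perform no worse than manually chosen points.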
Alexei K., Vladimir K. (2021) Determination of the Spatial Orientation of an Architectural Object from a Photograph. In: Silhavy R. (eds) Informatics and Cybernetics in Intelligent Systems. CSOC 2021. Lecture Notes in Networks and Systems, vol 228. Springer, Cham. https://doi.org/10.1007/978-3-030-77448-6_37