In cooperative perception, reliably detecting and localizing surrounding objects and communicating that information between vehicles is necessary for safety. However, V2V transmission of large datasets or images can be computationally expensive and can pose bandwidth issues, often making real-time implementation infeasible. An efficient and robust alternative for ensuring such cooperation is relative pose estimation between two vehicles sharing a common field of view. In particular, when an object is not in the field of view of an ego vehicle, dynamically detecting the object and transferring its location to the ego vehicle in real time is necessary. In such scenarios, reliable and robust pose recovery at each instant ensures accurate trajectory estimation by the ego vehicle. In the current study, pose recovery is achieved through common visual features present in a pair of images. Traditionally, algorithms such as SIFT and KAZE have been used to detect and match features between image pairs captured from different perspectives. With the recent advent of binary detection and description algorithms such as ORB and AKAZE, we present a comparative study of the efficacy and robustness of these methods for feature matching. The performance metrics for each method are the total number of detected features, the number of good matches, and the computation time. The study also evaluates each method under varying degrees of angular orientation and camera exposure settings, which can be helpful for motion estimation in dim light. Overall, AKAZE was the computationally fastest method, while ORB and SIFT fared equally well on the other parameters. This research can serve as a precursor to future trajectory prediction of dynamic objects by the ego vehicle when communication with the lead vehicle is suddenly lost after the initial data transfer.