Jose-Luis Matez-Bandera¹, Alberto Jaenal², Clara Gomez², Alejandra C. Hernandez², Javier Monroy¹, José Araújo² and Javier Gonzalez-Jimenez¹

¹ Machine Perception and Intelligent Robotics (MAPIR) Group, Malaga Institute for Mechatronics Engineering and Cyber-Physical Systems (IMECH.UMA), University of Malaga, Spain.
² Ericsson Research, Stockholm, Sweden.
**Contents:** Installation · Usage · Data Structure · Citation
## Installation

CoplaMatch requires hloc as the Visual Localization framework.
### 1. Environment Setup

First, ensure you have Anaconda installed. If not, download the installer from the Anaconda website.

```bash
# Example for Linux
bash ~/Downloads/Anaconda3-2023.07-Linux-x86_64.sh
```

### 2. Clone the Repository
```bash
git clone --recurse-submodules git@github.com:EricssonResearch/copla-match.git
cd copla-match
git submodule update --init --recursive
```

### 3. Install Dependencies
```bash
# Install hloc
cd hloc
python -m pip install -e .

# Rebuild opencv-contrib-python from source so the non-free modules are enabled
CMAKE_ARGS="-DOPENCV_ENABLE_NONFREE=ON" pip install -v --no-binary=opencv-contrib-python --force-reinstall opencv-contrib-python

# Pinned versions required by CoplaMatch
pip install numpy==1.21 imageio
```

You are now ready to execute the Jupyter notebooks or scripts.
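To check that the rebuilt OpenCV actually exposes the contrib modules used by the `cvSIFT`/`cvBRIEF` feature types, a quick sanity check can help (an illustrative snippet, not part of the repo):

```python
import cv2
import numpy as np

print(cv2.__version__)

# SIFT ships in the main modules of recent OpenCV releases
sift = cv2.SIFT_create()

# BRIEF lives in the contrib xfeatures2d module; this line fails if the
# contrib build (compiled with OPENCV_ENABLE_NONFREE) did not install correctly
brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()

img = np.zeros((480, 720), dtype=np.uint8)
keypoints = sift.detect(img, None)
print(f"OK: detected {len(keypoints)} keypoints on a blank test image")
```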
## Usage

To test CoplaMatch, you can use the TUM RGB-D data provided in the repo:

```bash
cd datasets/
unzip -q tum_testing.zip
cd ..
```

Reconstruct the map using the following command:
```bash
python3 do_reconstruction.py \
    --dataset_dir datasets/tum_testing/ \
    --feature_type cvSIFT_cvBRIEF \
    --matching_type NN-lowe \
    --plane_model \
    --has_input_model
```

Next, localize a query image against the built map. You can run CoplaMatch using either the Python or the C++ implementation.
**Option A: Python implementation**
```bash
python3 do_query.py \
    --dataset_dir datasets/tum_testing/ \
    --model_dir outputs/tum_testing/model_cvSIFT_cvBRIEF_NN-lowe_masked-planes/ \
    --localization_sequence localization \
    --feature_type cvORBdense_cvBRIEF \
    --matching_type coplanar \
    --number_of_queries 5 \
    --localization_type netvlad \
    --retrieval_top_k 10 \
    --plane_query \
    --show \
    --has_gt \
    --return_matches
```

**Option B: C++ implementation**
```bash
python3 do_query.py \
    --dataset_dir datasets/tum_testing/ \
    --model_dir outputs/tum_testing/model_cvSIFT_cvBRIEF_NN-lowe_masked-planes/ \
    --localization_sequence localization \
    --feature_type cvORBdense_cvBRIEF \
    --matching_type coplanar_cplusplus \
    --number_of_queries 5 \
    --localization_type netvlad \
    --retrieval_top_k 10 \
    --plane_query \
    --show \
    --has_gt \
    --return_matches
```

**Alternative: SuperPoint/SuperGlue**

You can also use SuperPoint as a descriptor with cross-detectors and SuperGlue as the matcher:
```bash
python3 do_reconstruction.py \
    --dataset_dir datasets/tum_testing/ \
    --feature_type cvSIFT_SuperPoint \
    --matching_type superglue \
    --plane_model \
    --has_input_model
```
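All of the invocations above share the same CLI surface, which makes scripted comparisons straightforward. A minimal sketch comparing the Python and C++ matcher back-ends (only the flags shown above come from the repo; the sweep itself is illustrative):

```python
import subprocess

# Compare the Python and C++ CoplaMatch matchers on the same model.
MATCHERS = ["coplanar", "coplanar_cplusplus"]

for matching_type in MATCHERS:
    subprocess.run(
        [
            "python3", "do_query.py",
            "--dataset_dir", "datasets/tum_testing/",
            "--model_dir",
            "outputs/tum_testing/model_cvSIFT_cvBRIEF_NN-lowe_masked-planes/",
            "--localization_sequence", "localization",
            "--feature_type", "cvORBdense_cvBRIEF",
            "--matching_type", matching_type,
            "--number_of_queries", "5",
            "--localization_type", "netvlad",
            "--retrieval_top_k", "10",
            "--plane_query",
            "--has_gt",
        ],
        check=True,  # abort the sweep if a run fails
    )
```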
## Data Structure

The required file structure for inputs and outputs is detailed below.

**Input dataset structure**

```
dataset/                          # Dataset root path
├── mapping/                      # Information to build the model
│   ├── images/                   # Images to build the model
│   │   ├── image01.jpeg          # Note: ensure sufficient 0-padding
│   │   ├── ...
│   │   └── image92.jpeg
│   │
│   ├── input_model/              # Model for triangulation
│   │   ├── cameras.txt           # COLMAP format: CAMERA_ID MODEL WIDTH HEIGHT PARAMS
│   │   │                         # Ex: 1 SIMPLE_PINHOLE 720 480 415.6 360 240
│   │   ├── images.txt            # COLMAP format: ID qw qx qy qz tx ty tz CamID Name
│   │   │                         # Ex: 1 0.49... 24.58... 1 image_0000.jpeg
│   │   └── points3D.txt          # Empty file (its existence is required)
│   │
│   └── masks/                    # (Optional) For masked models (e.g., planes)
│       ├── image01.jpeg.png      # Must share its name with the image + .png
│       └── ...
│
└── <localization_name>/          # Information to perform visual localization
    ├── images/                   # Query images
    │   ├── query01.jpeg          # Ensure names differ from model images
    │   └── ...
    ├── images.txt                # Query poses in COLMAP format
    ├── cameras.txt               # Query camera parameters in COLMAP format
    └── masks/                    # (Optional) Masks for localization
        ├── query01.jpeg.png
        └── ...
```
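As a sketch of the COLMAP text files described above (field layout per the comments in the tree; paths and pose values are placeholders), a minimal `input_model` could be written like this:

```python
from pathlib import Path

# Hypothetical dataset root; adjust to your own layout.
model_dir = Path("datasets/my_dataset/mapping/input_model")
model_dir.mkdir(parents=True, exist_ok=True)

# cameras.txt: CAMERA_ID MODEL WIDTH HEIGHT PARAMS (here f, cx, cy)
(model_dir / "cameras.txt").write_text(
    "1 SIMPLE_PINHOLE 720 480 415.6 360 240\n"
)

# images.txt: IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME
# COLMAP expects a second line per image (the 2D points), which may be empty.
(model_dir / "images.txt").write_text(
    "1 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1 image_0000.jpeg\n"
    "\n"
)

# points3D.txt must exist, but is left empty for triangulation.
(model_dir / "points3D.txt").touch()
```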
**Output structure**

```
outputs/<dataset_name>/                  # e.g., outputs/tum_testing/
└── <model_name>_<specs>/                # e.g., model_cvSIFT_cvBRIEF_NN-lowe_masked-planes/
    ├── features...h5
    ├── matches...h5
    ├── pairs...txt
    ├── sfm_model/
    │   ├── cameras.bin
    │   ├── images.bin
    │   ├── database.db
    │   └── points3D.bin
    │
    ├── <localization_sequence_name>_<specs>/
    │   ├── features...h5
    │   ├── global_features...h5
    │   ├── pairs...txt
    │   └── results...json
    │
    └── <localization_sequence_name2>_<specs>/
        └── ...
```
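Given that layout, localization runs can be enumerated programmatically; a small illustrative helper (the glob pattern simply follows the naming convention above):

```python
from pathlib import Path

# Hypothetical output root following the layout above.
output_root = Path("outputs/tum_testing")

# Each localization run stores its estimates in a results...json file.
for results_file in sorted(output_root.glob("*/*/results*.json")):
    model = results_file.parent.parent.name    # <model_name>_<specs>
    sequence = results_file.parent.name        # <localization_sequence_name>_<specs>
    print(f"{model} / {sequence}: {results_file.name}")
```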
## Citation

If you use any of the tools provided here, please cite the CoplaMatch paper:
```bibtex
@ARTICLE{matez_coplamatch,
  title          = {Cross-Detector Visual Localization with Coplanarity Constraints for Indoor Environments},
  author         = {Jose-Luis Matez-Bandera and Alberto Jaenal and Clara Gomez and Alejandra C. Hernandez and Javier Monroy and José Araújo and Javier Gonzalez-Jimenez},
  journal        = {Sensors},
  volume         = {25},
  number         = {24},
  article-number = {7593},
  year           = {2025},
  url            = {https://www.mdpi.com/1424-8220/25/24/7593},
  doi            = {10.3390/s25247593}
}
```