ETANet: An Efficient Triple-Attention Network for Salient Object Detection

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Citations (Scopus)

Abstract

Salient object detection (SOD) is a critical vision task with ubiquitous applications. Most existing methods have complicated structures and a large number of parameters, which prevents them from being deployed in practical applications. To solve this problem, we propose an efficient triple-attention network (ETANet), which consists of multiple attention mechanisms. Specifically, we design a crossed spatial-channel attention mechanism to extract useful low-level features and an efficient self-attention branch with multi-scale receptive fields to perceive high-level features. In addition, we propose a dilated criss-cross fusion mechanism to fuse low-level and high-level features efficiently. Experimental results show that our architecture achieves competitive performance and a favorable trade-off between accuracy and efficiency compared with other heavyweight methods.
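The abstract names three components: a crossed spatial-channel attention for low-level features, an efficient multi-scale self-attention branch for high-level features, and a dilated fusion of the two. The sketch below is only an illustration of how such a composition could look in PyTorch; every module name, layer choice, and shape is an assumption, not the authors' implementation.

# Hypothetical sketch of the triple-attention composition described in the abstract.
# Module names, layer choices, and shapes are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossedSpatialChannelAttention(nn.Module):
    """Re-weights low-level features with spatial and channel attention maps."""
    def __init__(self, channels):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Apply the channel and spatial attention maps multiplicatively.
        return x * self.channel_fc(x) * self.spatial_conv(x)


class MultiScaleSelfAttention(nn.Module):
    """Cheap self-attention over high-level features pooled at several scales."""
    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=1)
        # Shrink keys/values with multi-scale pooling to keep attention cheap.
        ks = torch.cat([F.adaptive_avg_pool2d(k, s).flatten(2) for s in self.scales], dim=2)
        vs = torch.cat([F.adaptive_avg_pool2d(v, s).flatten(2) for s in self.scales], dim=2)
        attn = torch.softmax(q.flatten(2).transpose(1, 2) @ ks / c ** 0.5, dim=-1)
        out = (attn @ vs.transpose(1, 2)).transpose(1, 2).reshape(b, c, h, w)
        return self.proj(out) + x


class DilatedFusion(nn.Module):
    """Fuses low- and high-level features with a dilated convolution."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(channels * 2, channels, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),  # saliency logits
        )

    def forward(self, low, high):
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([low, high], dim=1))


if __name__ == "__main__":
    low = torch.randn(1, 64, 88, 88)    # stand-in low-level backbone features
    high = torch.randn(1, 64, 22, 22)   # stand-in high-level backbone features
    low = CrossedSpatialChannelAttention(64)(low)
    high = MultiScaleSelfAttention(64)(high)
    print(DilatedFusion(64)(low, high).shape)  # torch.Size([1, 1, 88, 88])

The design intent illustrated here is the trade-off the abstract claims: attention is applied to cheap pooled summaries rather than full-resolution feature maps, keeping the parameter and compute budget small compared with heavyweight SOD models.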

Original language: English
Title of host publication: 37th International Conference on Information Networking, ICOIN 2023
Publisher: IEEE Computer Society
Pages: 271-276
Number of pages: 6
ISBN (Electronic): 9781665462686
DOIs
Publication status: Published - 2023
Event: 37th International Conference on Information Networking, ICOIN 2023 - Bangkok, Thailand
Duration: 11 Jan 2023 - 14 Jan 2023

Publication series

Name: International Conference on Information Networking
Volume: 2023-January
ISSN (Print): 1976-7684

Conference

Conference: 37th International Conference on Information Networking, ICOIN 2023
Country/Territory: Thailand
City: Bangkok
Period: 11/01/23 - 14/01/23

Bibliographical note

Publisher Copyright:
© 2023 IEEE.

Keywords

  • attention mechanism
  • receptive field block
  • salient object detection
