Keynote Speakers
Prof. Dimitris N. Metaxas, Computer Science, Rutgers University, USA
Biography: http://www.cs.rutgers.edu/~dnm/
Technical Program
Each oral presentation is allotted 20 minutes.
13:20 Opening, with a 10-minute talk by the Chair
13:30-13:50 ID3: Learning Spatio-temporal Features for Action Recognition with Modified Hidden Conditional Random Field
13:50-14:10 ID1: Camera Calibration and Shape Recovery from Videos of Two Mirrors
14:10-14:30 ID2: Efficient Online Spatio-Temporal Filtering for Video Event Detection
14:30-14:50 ID6: Grading Tai Chi Chuan Performance in Competition with RGBD Sensors
Break
15:15-16:00 Keynote Speech: Dimitris N. Metaxas, Computer Science, Rutgers University
16:00-16:20 ID4: Activity Recognition from Still Images with Transductive Non-negative Matrix Factorization
16:20-16:40 ID7: Human Action Recognition by Random Features and Hand-Crafted Features: A Comparative Study
16:40-17:00 ID5: Mode-driven Volume Analysis Based on Correlation of Time Series
17:00-17:20 ID8: Modeling Supporting Regions for Close Human Interaction Recognition
17:20 Chair wrap-up
End
Call for Papers
The goal of this workshop is to provide a forum for recent research advances in video event categorization, tagging, and retrieval, particularly given the rapidly increasing volume of big video data. The workshop seeks original, high-quality submissions from leading researchers and practitioners in academia and industry, dealing with theories, applications, and databases of visual event recognition. Topics of interest include, but are not limited to:
- Big video event database gathering and annotation
- Large-scale dataset benchmarking
- Deep learning for large-scale event recognition
- Event detection in big social media
- Event recognition with depth cameras
- Multi-modal and multi-dimensional event recognition
- Multi-spectrum data fusion
- Spatio-temporal features for event categorization
- Hierarchical event recognition
- Probabilistic graph models for event reasoning
- Global/local event descriptors
- Metadata construction for event recognition
- Event-based video segmentation and summarization
- Efficient indexing and concept modeling for video event retrieval
- Semantic-based video event retrieval
- Online video event tagging
One paper with excellent technical contributions and broad impact will be selected for the Best Paper Award.
Aims and Scope
With the vast growth of Internet capacity and speed, as well as the wide use of media technologies in daily life, there is a pressing demand to efficiently process and organize the video events rapidly emerging from the Internet (e.g., YouTube), wide-area surveillance networks, mobile devices, smart cameras, depth cameras (e.g., Kinect), etc. The human visual perception system can, without difficulty, interpret and recognize thousands of events in videos, despite high levels of object clutter, different types of scene context, variability in motion scale, appearance changes, occlusions, and object interactions. For computer vision systems, however, automatic video event understanding has remained very challenging for decades. Broadly speaking, the challenges include robust detection of events under motion clutter, event interpretation in complex scenes, multi-level semantic event inference, placing events in context and across multiple cameras, and event inference from object interactions.
In recent years, steady progress has been made towards better models for video event categorization and recognition, e.g., from modeling events with bags of spatio-temporal features to discovering event context, from detecting events with a single camera to inferring events through a distributed camera network, and from low-level event feature extraction and description to high-level semantic event classification and recognition. However, current progress in video event analysis is still far from its promise. It remains very difficult to retrieve or categorize a specific video segment based on its content in a real multimedia system or surveillance application. Existing techniques are usually tested on simplified scenarios, such as the KTH dataset, whereas real-life applications are much more challenging and require special attention. To advance the state of the art, we must adapt recent and existing approaches to find new solutions for intelligent large-scale video event understanding.
Important Dates
- Paper submission: July 2, 2014 (extended from June 20, 2014)
- Notification of acceptance: July 18, 2014
- Pre-paper for USB stick: July 25, 2014
- Camera-ready paper due: August 8, 2014
- Workshop: September 6, 2014
Organizers
- Prof. Thomas S. Huang, University of Illinois at Urbana-Champaign, USA
- Prof. Tieniu Tan, Chinese Academy of Sciences, China
Program Chairs
- Dr. Yun Raymond Fu, Northeastern University, Boston, USA
- Dr. Ling Shao, The University of Sheffield, UK
- Dr. Jianguo Zhang, University of Dundee, UK
- Dr. Liang Wang, Chinese Academy of Sciences, China
Program Committee
- Aggelos K. Katsaggelos, Northwestern University, USA
- Graeme Jones, Kingston University, UK
- Shiguang Shan, Chinese Academy of Sciences, China
- Charles Dyer, University of Wisconsin-Madison, USA
- Junsong Yuan, Nanyang Technological University, Singapore
- Baoxin Li, Arizona State University, USA
- Gian Luca Foresti, University of Genoa, Italy
- Avinash Kak, Purdue University, USA
- Sungjoo Suh, Samsung, South Korea
- Zicheng Liu, Microsoft Research, USA
- Fatih Porikli, Australian National University, Australia
- Xueming Qian, Xi'an Jiaotong University, China
- Rama Chellappa, University of Maryland, USA
- Lucio Marcenaro, University of Genoa, Italy
- Ming Shao, Northeastern University, USA
- Vittorio Murino, Istituto Italiano di Tecnologia & University of Verona, Italy
- Jinjun Wang, Xi'an Jiaotong University, China
Submission
- When submitting manuscripts to this workshop, the authors acknowledge that manuscripts substantially similar in content have NOT been submitted to another conference, workshop, or journal. However, dual submission to the ECCV 2014 main conference and VECTaR'14 is allowed.
- The paper format is the same as for the ECCV main conference. Please follow the instructions on the ECCV 2014 website: http://eccv2014.org/author-instructions/.
- To submit a paper, please go to the Submission Website (https://cmt.research.microsoft.com/VECTAR2014/).
Each submission will be reviewed by at least three reviewers, drawn from program committee members and external reviewers, for originality, significance, clarity, soundness, relevance, and technical content. Accepted papers will be published together with the proceedings of ECCV 2014.