
Dear Rose,

Congratulations! On behalf of the Program Committee, we are delighted to inform you that your submission:

Paper#: 48

Title: TransNets: Learning to Transform for Recommendation

has been accepted by the RecSys 2017 conference as a long paper for oral presentation.

As usual, RecSys is a very competitive forum. This year, RecSys received a total of 247 submissions and accepted 26 long papers (acceptance rate 20.8%) and 20 short papers (acceptance rate 16.4%). The review process was very selective due to the high number of submissions. The Program Committee worked very hard to ensure that every paper got at least three reviews. All papers, particularly those at the borderline, were also reviewed and thoroughly discussed by a Senior Program Committee member and by the PC Chairs.

The reviews for your paper are attached to this e-mail. Please carefully read them and revise your paper accordingly. Please also proofread your paper before submission to check for typos and grammatical errors.

A. The instructions for camera-ready-copy preparation are available at:

http://www.sheridanprinting.com/typedept/recsys.htm

Kindly review the templates and formatting guidelines, and familiarize yourselves with the helpful formatting hints included there.

B. The deadline for submission of your camera-ready version is ***July 7th, 2017***. No extensions will be given. Kindly attend to your submission early if you have travel planned near the deadline. Papers and abstracts not submitted on time will be dropped from the proceedings.

C. You will receive two new emails shortly:

C1. One email from ACM Rightsreview (rightsreview@acm.org) with a link to the electronic ACM copyright-permission form(s) to be completed. Upon completing the electronic form, you will receive a confirmation email. The confirmation email will contain the ACM copyright-permission block text, conference data, and the DOI string/URL specific to your submission, which must appear on the first page. See the formatting help in item A on how to include this text while minimizing its space impact on the first page.

C2. One email from Sheridan Communications (acm@sheridanprinting.com). This email will contain your assigned submission id#, a link to the formatting instructions (same as item A above), and a unique link to submit your final publication-ready version on or before the submission deadline.

Let us kindly remind you that at least one author for each accepted paper is required to register by **July 7th** in order for the paper to be included in the proceedings.

Thank you again for submitting to RecSys 2017, and we look forward to seeing you and your co-authors at the RecSys 2017 conference in Como!

Best regards,

Shlomo Berkovsky and Alex Tuzhilin

ACM RecSys 2017 Program Chairs

----------------------- REVIEW 1 ---------------------

PAPER: 48

TITLE: TransNets: Learning to Transform for Recommendation

AUTHORS: Rose Catherine and William Cohen

Relevance to RecSys: 5 (excellent)

Novelty: 4 (good)

Technical Quality: 4 (good)

Evaluation: 3 (fair)

Presentation and Readability: 5 (excellent)

Overall evaluation: 4 (weak accept)

Reviewer's confidence/expertise: 3 (medium)

----------- Overall evaluation -----------

- It is an extension of the DeepCoNN concept, so it is not really innovative.

- The results, however, demonstrate an improvement in the MSE metric. Other metrics have not been considered.

----------------------- REVIEW 2 ---------------------

PAPER: 48

TITLE: TransNets: Learning to Transform for Recommendation

AUTHORS: Rose Catherine and William Cohen

Relevance to RecSys: 5 (excellent)

Novelty: 4 (good)

Technical Quality: 4 (good)

Evaluation: 4 (good)

Presentation and Readability: 5 (excellent)

Overall evaluation: 5 (strong accept)

Reviewer's confidence/expertise: 4 (high)

----------- Overall evaluation -----------

The paper presents a deep learning-based approach for using review text to improve rating prediction. This work specifically aims to address a problem in another recent work [44], namely the inaccessibility of the user-item review text at testing time. The authors propose to train two networks, source and target, on top of which a transformation layer is learned to approximate the user-item review text at testing time. This is the key novelty of this work.

Most building blocks in this paper are leveraged from existing work on deep neural networks, and the interconnections and technical details are presented clearly. The implementation based on TensorFlow is straightforward and sound. The experiments across several datasets demonstrate the competitive performance of the proposed approach, especially in comparison against DeepCoNN [44].

Some minor comments:

1) The authors should discuss how this approach can be used in a larger scope, meaning beyond the review domain.

2) It would be more interesting to evaluate the cold-start cases. For example, how does it perform for users/items with little review text, or for users/items with many reviews but a limited rating profile?

----------------------- REVIEW 3 ---------------------

PAPER: 48

TITLE: TransNets: Learning to Transform for Recommendation

AUTHORS: Rose Catherine and William Cohen

Relevance to RecSys: 5 (excellent)

Novelty: 4 (good)

Technical Quality: 4 (good)

Evaluation: 3 (fair)

Presentation and Readability: 3 (fair)

Overall evaluation: 4 (weak accept)

Reviewer's confidence/expertise: 3 (medium)

----------- Overall evaluation -----------

This paper introduces a neural model to predict users' ratings over products using review text. In general I think this is a good paper, with clear contributions, but also with some weaknesses. More details below.

Motivation:

The motivation for developing this model is interesting: it addresses limitations of the state-of-the-art DeepCoNN model. The weakness of DeepCoNN is that it produces good rating predictions for user-item pairs only when it has observed (at training and testing time) the actual review. The contributions in terms of the proposed model are well presented.

Technical Soundness of Method and Solution:

The proposed solution, TransNets, is explained in enough detail to be reproduced. The only weak point in this section is the lack of details about one of the baselines, the MF method based on a neural network. More details are needed for replication.

Related Work:

- This section seems to be appropriately covered. I am not an expert in the topic of neural nets for recsys, but I was familiar with some of the works mentioned and learned of a few more after reading it.

Datasets:

The datasets are appropriate for the task proposed.

Evaluation:

- The MSE results show the potential of the proposed method compared to the state of the art on several datasets. However, the evaluation has room for improvement:

-- Lack of statistical tests.

-- The analysis used to pick the number of layers is not convincing. The authors argue that 2 layers can represent many logical operations (such as XOR), but the same can be said of 4 and 5 layers, so the behavior of the model worsening (with 4 layers) and then improving (with 5 layers) is not well addressed.

-- The only metric used for evaluation is MSE; other common ranking metrics are not used. There is no discussion of how this model affects MAP, nDCG, recall@k, or diversity.

-- A nice point in the evaluation was the use of samples of generated reviews, but the authors only show examples of well-generated reviews. What about reviews generated with problems or errors? A seminal paper on automatic image captioning (https://arxiv.org/abs/1609.06647) shows examples of both good and bad predictions, which might help in understanding when and how these models work and fail.

Other comments:

- The title suggests a model that works in more general contexts or tasks (Learning to Transform for Recommendation), but it is only applied to the task of rating prediction. The examples shown at the end as "qualitative evaluation" are just cases where the system works well; without the counterpart of examples where the model does not work well, we can't really tell how it behaves.

------------------------- METAREVIEW ------------------------

PAPER: 48

TITLE: TransNets: Learning to Transform for Recommendation

RECOMMENDATION: accept

The paper presents a well-motivated extension of the DeepCoNN approach that leads to significant performance improvements. In general, it is a well-executed piece of research work, and the experimental evaluation methodology is reasonable. One item that is not clear is whether the MF approach that was compared against relies entirely on the ratings or also uses the text of the reviews.