Lijie Chen

@wjmzbmr1

Followers: 2.1K
Following: 123

PhD student in MIT EECS

Cambridge, MA
Joined on January 19, 2016
Statistics

We looked inside some of the tweets by @wjmzbmr1 and here's what we found interesting.

Inside 100 Tweets

Time between tweets: 5 days
Average replies: 2
Average retweets: 16
Average likes: 76
Tweets with photos: 5 / 100
Tweets with videos: 0 / 100
Tweets with links: 0 / 100

Quoted @cstheory

TR20-148 | Simple and fast derandomization from very hard functions: Eliminating randomness at almost no cost | Roei Tell, Lijie Chen https://t.co/NYqVnXOTtE

How fast can you derandomize a T(n)-time algorithm? In this work, Roei and I show that, under some plausible conjectures, one can achieve an n·T(n)-time derandomization, and that this is optimal assuming the Nondeterministic Strong Exponential Time Hypothesis (NSETH). https://t.co/omdJ50HFGs
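
A rough formalization of the two claims above (my own paraphrase; the paper's exact statements and parameters differ):

```latex
% Rough paraphrase of the tweet's two claims, not the paper's exact statement.
% Upper bound, under the plausible hardness conjectures assumed in the paper:
\mathrm{BPTIME}[T(n)] \subseteq \mathrm{DTIME}\!\left[\tilde{O}\big(n \cdot T(n)\big)\right].
% Matching lower bound: assuming NSETH, the multiplicative overhead of
% roughly n over T(n) cannot be avoided in general.
```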

Complementary to the recent result by Cheu-Ullman (they proved tight bounds for selection in the robust shuffle model with unbounded messages), we also obtain better sample lower bounds when the number of messages is not too large, and without requiring robustness.

The proof of the above result builds on the n/log n lower bound for CountDistinct in the property testing model (estimating the unseen), generalizing that method to bound the TV distance between mixtures of multi-dimensional Poisson distributions using moment-matching.
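
A toy numerical illustration of the moment-matching phenomenon (my own one-dimensional Python example, not the paper's construction): two different Poisson mixtures whose first moments match are already fairly close in TV distance.

```python
# Toy illustration (not the paper's proof): two mixtures of Poisson
# distributions with matching means are close in total variation distance.
# The mixtures and parameters below are made up for illustration.
from math import exp, factorial

def poisson_pmf(lam, k):
    """P[Poisson(lam) = k]."""
    return exp(-lam) * lam**k / factorial(k)

def mixture_pmf(components, k):
    """PMF of a mixture given as a list of (weight, lambda) pairs."""
    return sum(w * poisson_pmf(lam, k) for w, lam in components)

# Mixture A: a single Poisson(1).
# Mixture B: 0.5*Poisson(0.5) + 0.5*Poisson(1.5) -- same mean (1.0) as A.
A = [(1.0, 1.0)]
B = [(0.5, 0.5), (0.5, 1.5)]

# TV distance = (1/2) * sum_k |P_A(k) - P_B(k)|; the tail past k=50 is negligible.
tv = 0.5 * sum(abs(mixture_pmf(A, k) - mixture_pmf(B, k)) for k in range(50))
print(f"TV(A, B) ~= {tv:.4f}")  # roughly 0.07, despite the mixtures differing
```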

CountDistinct: n users each hold one element, and they want to privately figure out how many distinct elements they hold in total. Surprisingly, we show that (ln n - ln ln n)-local DP algorithms must incur n/polylog(n) error, while there is an (ln n)-local DP algorithm with sqrt(n) error.
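
For contrast with these local-model bounds, here is a minimal sketch of the trivial central-model baseline (my own illustration, not the paper's protocol): in the central model the distinct count has sensitivity 1, so the Laplace mechanism already achieves O(1/eps) expected error.

```python
# Minimal sketch of the central-DP baseline for CountDistinct (my own
# illustration, NOT the paper's local/shuffle protocols). Changing one
# user's element shifts the distinct count by at most 1 (sensitivity 1),
# so the Laplace mechanism gives eps-DP with O(1/eps) expected error.
import random

def count_distinct_central_dp(elements, eps):
    """eps-DP estimate of the number of distinct elements (central model)."""
    true_count = len(set(elements))
    # Laplace(scale=1/eps) noise, sampled as a difference of two exponentials.
    noise = random.expovariate(eps) - random.expovariate(eps)
    return true_count + noise

users = [random.randint(1, 500) for _ in range(1000)]  # n users, one element each
print(count_distinct_central_dp(users, eps=1.0))
```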

Quoted @cstheory

On Distributed Differential Privacy and Counting Distinct Elements https://t.co/oiCTAjRosI

An interesting work I did with my amazing coauthors (Badih, Ravi, and Pasin) on the (non-interactive) local and shuffle model of DP! TL;DR: CountDistinct is extremely hard for local and 1-message shuffle DP, but surprisingly easy for shuffle DP with 1/2+o(1) expected messages. https://t.co/RDGXFkrNrB

I was working on this model during my summer internship at Google and was captivated by this amazing problem, but couldn't figure out how to deal with an unbounded number of messages. Very happy to see this being resolved!

Quoted @cstheory

The Limits of Pan Privacy and Shuffle Privacy for Learning and Estimation https://t.co/N4eUUug4s7

Very impressive! They resolved the multi-message shuffle complexity of selection/parity-learning without any restriction on the number of messages! (Previously, even the two-message case was open.) https://t.co/oY0mNP1RRC

Quoted @JukkaSuomela

HALG 2020 videos: https://t.co/uPlWImWQlC

More things to watch! (The surveys look quite nice, for instance: https://t.co/al3q6OyUgR) https://t.co/Lv5bdMVkvL

Quoted @cstheory

TR20-127 | $k$-Forrelation Optimally Separates Quantum and Classical Query Complexity | Nikhil Bansal, Makrand Sinha https://t.co/P5pCJuj0JS

Wow! This is a big breakthrough in quantum query complexity! https://t.co/py8NIwQuaI

Quoted @cstheory

TR20-126 | Indistinguishability Obfuscation from Well-Founded Assumptions | Aayush Jain, Huijia Lin, Amit Sahai https://t.co/QZ3CRl7S8u

Wow, looks like a groundbreaking work https://t.co/CRiyP2v6ES

Found some very nice online lecture notes about proving information-theoretic lower bounds 🥳 https://t.co/lbFJKekN5Q

Quoted @thegautamkamath

Rahul Ilango's year in best student papers (all solo): ITCS 2020, CCC 2020, FOCS 2020. All on the challenging MCSP. Did I mention he just finished his first year as a grad student? @rrwilliams must be a proud advisor!

wow! https://t.co/L8kOhrNRBH

My CS Theory Toolkit course is finished. You can find the complete playlist of 99 videos here: https://t.co/70rqafhX9t

Cool result! If there is a univariate degree-d polynomial f such that any representation f = sum_i c_i (f_i)^2 requires ≥ d^0.5001 monomials, then the algebraic version of P ≠ NP holds. The proof builds on a classic result: https://t.co/LkW4jQtFCa https://t.co/7wnMlW37bw
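
To make the measure in this statement concrete, here is a toy sympy check (my own illustrative example, unrelated to the paper's hard polynomial): verify a sum-of-squares representation f = sum_i c_i (f_i)^2 and count the monomials appearing among the f_i.

```python
# Toy illustration of the "sum of squares" measure in the quoted result
# (my own example, not from the paper): write f as sum_i c_i * (f_i)^2
# and count the distinct monomials appearing among the f_i.
import sympy as sp

x = sp.symbols('x')
f = x**2 + 2*x + 1

# One representation: f = 1 * (x + 1)^2, i.e., c_1 = 1 and f_1 = x + 1.
f1 = x + 1
assert sp.expand(f - f1**2) == 0  # check the representation is valid

# The measure counts monomials among the f_i: here {x, 1}, so the count is 2.
print(len(sp.Poly(f1, x).monoms()))  # -> 2
```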

Tom Gur

Avishay Tal and Prasad Raghavendra are organising an in-depth lecture series on Advances in Boolean Function Analysis. The list of speakers is extraordinary! The first talk is on Wednesday, July 15, 10am PDT. Dor Minzer will talk about the Fourier-Entropy-Influence Conjecture.
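
For reference, the standard form of the conjecture (as stated in the literature, not quoted from the talk): there is a universal constant C such that for every Boolean function f : {-1,1}^n -> {-1,1},

```latex
% Fourier-Entropy-Influence conjecture (Friedgut-Kalai): the Fourier entropy
% of f is at most a universal constant times its total influence.
\sum_{S \subseteq [n]} \hat{f}(S)^2 \log_2 \frac{1}{\hat{f}(S)^2}
  \;\le\; C \sum_{S \subseteq [n]} |S| \, \hat{f}(S)^2 \;=\; C \cdot \mathbf{I}[f].
```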

A video of the STOC workshop on "MCSP and Hardness Magnification" can be found below. Many thanks to the speakers, as well as Josh and Marco for organizing! https://t.co/wkosCw9SUk

InstaHide: trains deep nets on encrypted data only. Very fast, preserves privacy of user data, small accuracy loss (unlike differential privacy). https://t.co/4S3VUr3Lsg https://t.co/yj8EOxtVVJ

In the summer there will be an online complexity seminar, hosted via Zoom! https://t.co/ib5D7I59U2 The first talk (6/18, noon EST) is by Rahul Ilango, on recent results on depth-d Boolean formulas obtained by a form of lifting, with applications to MCSP and lower bounds.

If this conference version is too short for you, there is another one-hour talk on the same paper at IAS :) https://t.co/XoH31ZzJmp
