2017 Sift Engineering in Review

Toshi Kureha – December 31, 2017

2017 has been a pivotal year for Sift Science and the engineering team.  We delivered amazing product launches and technological advancements, and added strong talent across multiple geographic locations.

I wanted to take a moment to highlight a number of these accomplishments and set the stage for what we hope to accomplish in 2018.


From a product perspective, we launched a wealth of amazing products & features in 2017: our new Account Takeover Prevention product, Smart Collaboration Queues, completely revamped Integration Health reporting, the ability to fight fraud rings with a single click, a renewed SOC 2 Type 2 security certification, and our new mobile solution.  We are very proud of these new offerings and enhancements and of the reception they have received, which is reflected in the bottom line: record-breaking revenue and customer growth this year for Sift Science, following our Series C funding last year.


On the technology side, 2017 has been a transformative year.  We completely revamped the provisioning & management of our cloud infrastructure with Salt & Terraform, while expanding our cloud footprint by 6x to keep up with our customer and revenue growth.  We also put additional redundant systems in place across 3 data centers, employed circuit-breaking technology such as Hystrix, and significantly improved our uptime while lowering latency by over 50% through both hardware optimization and parallelization of our online scoring pathways.
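Hystrix implements circuit breaking on the JVM; the pattern itself is simple enough to sketch. The following minimal Python sketch illustrates the concept only – it is not our production code, and the class name and thresholds are hypothetical:

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: after repeated failures it 'trips open' and
    fails fast, then allows a probe request after a cool-down period."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # time the breaker tripped, or None if closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            # Half-open: let one request through to probe recovery.
            self.opened_at = None
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

The payoff is that a struggling downstream dependency stops receiving traffic immediately instead of tying up threads on timeouts, which is exactly the failure mode circuit breaking protects a scoring pathway from.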

Running & analyzing thousands of machine learning models is what we do, and this was also the year we implemented Airflow to manage our increasingly sophisticated machine learning pipelines & workflows.  Speaking of machine learning, the core of our approach has always been our global network of data and an ensemble of “standard” machine learning techniques such as Naive Bayes, Decision Forests, Logistic Regression, and N-Gram analysis.  This year we put Deep Learning into production to further extend our edge in accuracy, using LSTMs running on TensorFlow and achieving a 5% to 15% reduction in error for some of our largest customers.  And to ensure our customers’ automations are not negatively affected as we deploy new model changes, we deployed a score calibration system that ensures thresholds set by customers continue to work as intended.
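The internals of our calibration system deserve their own post, but the core idea can be sketched: map a new model’s raw scores onto the score scale customers already use, so that an existing threshold keeps flagging roughly the same fraction of traffic. Below is a hypothetical quantile-matching sketch in Python – an illustration of the general technique, not our production implementation:

```python
import bisect

def build_calibrator(old_scores, new_scores):
    """Return a function mapping new-model scores onto the old model's
    score scale by matching empirical quantiles, so existing customer
    thresholds keep flagging roughly the same fraction of traffic."""
    old_sorted = sorted(old_scores)
    new_sorted = sorted(new_scores)
    n = len(new_sorted)

    def calibrate(score):
        # Fraction of new-model scores at or below this raw score...
        q = bisect.bisect_right(new_sorted, score) / n
        # ...mapped to the same quantile of the old score distribution.
        idx = min(int(q * len(old_sorted)), len(old_sorted) - 1)
        return old_sorted[idx]

    return calibrate
```

For example, if a new model emits scores on half the scale of the old one, a raw score at the new model’s median calibrates to the old model’s median, so a customer threshold set against the old scale keeps its intended meaning.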

We also held two week-long hackathons this year, and each year we see amazing results from the team; some of our now revenue-generating products, like Account Takeover Prevention, and key technology now running in production, like Deep Learning, started out as hackathon projects. 

Many of these great accomplishments deserve their own blog posts, and in the not-too-distant future we hope to share what we have learned from successfully bringing these various technologies & innovations into production at Sift Science. 


All this would not have been possible back in 2016, when we had only 15 engineers.  In 2017, not only did the engineering team nearly triple in size, but we also opened our new Sift Science engineering office in Seattle, and that team is now nearly as large as all of Sift engineering was in 2016.  We have hired engineers from a variety of backgrounds – from fresh out of school to senior engineers with 10+ years of experience, from folks coming from large established companies like Amazon & Microsoft to small innovative startups, and from SREs to full-stack machine learning engineers.  The team has grown in number while adding to the diversity of its skillsets and backgrounds. 

As part of this growth, the engineering team is now organized into long-running (1) full-stack feature teams, where it made sense to optimize for product/feature delivery, or (2) functional teams, where it made sense to build a center of excellence & deep specialization.  We have made explicit choices about what we optimize for and put countermeasures in place for the tradeoffs we were making (e.g., ensuring we don’t end up with 3 different sets of front-end best practices as we scale).  We have also continued to host great interns this year – in fact, the program has been so popular & successful that we are already full for the 2018 internship program, even after expanding the number of spots for the upcoming summer.


Sifties giving tech talks

We have always been proponents of leveraging innovation from the community, but we also believe in giving back and sharing with the community.  To that end, we have held multiple Turn-Up-The-Bayes ML talks in San Francisco as well as multiple meetups in Seattle, talking about how we do applied machine learning at scale.  We have also given tech talks at community-led events such as the Global Big Data Conference, 2017 AI Download, HBaseCon, CTO World Congress, Collision, and more, with topics ranging from how we do feature learning at scale, to running a highly scalable and reliable machine learning infrastructure, to how startups need to evolve their culture, processes, and teams as they grow from early stage to mid-stage amid rapid growth.  I’m also proud of our efforts to promote diversity, and this summer we hosted a discussion, driven by our female engineers, on today’s challenges.  We also like to share & discuss ideas more informally – we made a team trip to NIPS in Long Beach in December, learning and exchanging ideas with the great community there.


Looking forward to 2018, we have some very exciting new products and enhancements that I can barely keep under wraps… stay tuned to see how Sift further solidifies its leadership position.

On the technology side, I can and want to share a glimpse into what we’re doing in 2018.  Here at Sift, we believe that the key to success in scaling out an engineering organization is enabling truly autonomous, aligned, right-sized teams.  To achieve that, in addition to the right culture & process, we need complementary technology & architecture.  To that end, we’re currently in the process of transforming our large, complex system into a service mesh, leveraging technology such as Envoy & gRPC, along with complementary technology such as Kubernetes for our deployment and production orchestration infrastructure, to enable stronger & more autonomous team ownership.

On the data-storage side, we will soon complete the production deployment of our new multi-level storage system, which optimizes the underlying storage based on latency needs.  On the machine learning side, there is an endless list of enhancements beyond Deep Learning that we believe will keep our competitive edge as sharp as ever – leveraging both labeled and unlabeled data at scale, creating regional clusters, building custom, customer-specific non-linear ensembles, improving device identification, and more.  Finally, as our data science needs increase, we are looking at taking some of our prototype work on AWS Athena, Redshift (Spectrum), and EMR and applying it to our various use cases.
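To make the multi-level storage idea concrete, here is a toy Python sketch of the read path – an assumed design for illustration, not our actual system: reads consult the fastest tier first, fall back to slower tiers, and promote hot keys upward.

```python
class TieredStore:
    """Toy multi-level store: reads try the fastest tier first and fall
    back to slower tiers, promoting hits so hot keys stay fast."""

    def __init__(self, tiers):
        # tiers: list of dict-like stores ordered from fastest
        # (e.g. in-memory cache) to slowest (e.g. disk or blob storage).
        self.tiers = tiers

    def get(self, key):
        for i, tier in enumerate(self.tiers):
            if key in tier:
                value = tier[key]
                # Promote the hit into all faster tiers.
                for faster in self.tiers[:i]:
                    faster[key] = value
                return value
        raise KeyError(key)

    def put(self, key, value, level=-1):
        # Writes land in a chosen tier; default is the slowest
        # (durable) tier, letting reads populate the faster ones.
        self.tiers[level][key] = value
```

The design choice this illustrates is that latency-sensitive reads pay the slow-tier cost at most once per key, after which the promoted copy serves subsequent reads at the fast tier’s latency.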

Sam Altman, president of Y Combinator, said: “Spending a few years somewhere that’s scaling fast will give you more varied experiences than many people get in a decade at a large company.”  2017 at Sift was exactly like that, and I am very happy with and proud of the team and their ability to execute on the vision while maintaining a strong culture and employee happiness.  2018 looks to be even more ambitious and challenging, but ultimately rewarding.  I look forward to taking the next step through this inflection point of growth at Sift – with the great foundation built in 2017, and with the great team members here at Sift.


Toshinari Kureha
Head of Engineering
Sift Science