Large Data Volumes (LDV) and Skuid

Skuid team,

Unmesh Sheth posted a question regarding large result sets in his post "Modal: max # of records limit".

His post prompted me to think about my experience working with Enterprise accounts that have large objects and LDV.  Unmesh mentions 8,000 records; I frequently work with tens of thousands, hundreds of thousands, and even millions of records.

Here are my questions / comments:

  1. Does Skuid test / regression test using LDV?   If so, can you share your benchmarks?
  2. Do you have a set of best practices for LDV?  If not, consider this a request.
  3. Along with #1 and #2, can you share examples of the pages you use for LDV testing, e.g. models, visualizations, tables, etc.?
  4. Do you have case studies or customer anecdotes regarding LDV?  If so, please share.
  5. LDV may be a useful webinar / deep-dive topic.

Regards,
Irvin

We are increasingly working with customers that have VLDV (very large data volumes), so we are building up a set of anecdotal best practices.  However, we do need to formalize these and document them in a number of venues…  Thanks for the encouragement.  Consider the challenge accepted. 

Bump.

Hi Irvin,

Quick clarifying question.  Are you talking about using Skuid in orgs that have millions of rows but only pulling up a few records at a time, or are you talking about actually pulling large data volumes (say, 100,000 rows) directly into a Skuid model on one particular page?  Or both?

Or using aggregate models?
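
(For context: an aggregate model rolls rows up server side, so the client only ever receives the summary rows.  Here's a minimal anonymous-Apex sketch of the underlying idea, assuming a standard Opportunity object; the aliases are purely illustrative, not what Skuid generates verbatim:)

    // One summary row per owner, instead of every underlying Opportunity row.
    // COUNT and SUM run inside Salesforce; the raw rows never reach the client.
    List<AggregateResult> rollup = [
        SELECT OwnerId, COUNT(Id) cnt, SUM(Amount) total
        FROM Opportunity
        GROUP BY OwnerId
    ];
    System.debug(rollup);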

Hi Ben,

Pulling 100K rows into the client is probably not a good idea. :slight_smile:

I am thinking it would be nice to understand what sort of VLDV testing Skuid performs, maybe share your metrics, and of course publish best practices. 

Does Skuid have customers with VLDV?  Perhaps a case study or two could be shared.

BTW, I think in terms of millions of rows, not just 100K.

Thanks,
Irvin

Ok, that makes sense.  

Skuid does have VLDV customers, but Skuid software isn’t really what deals with those issues.  For the most part, Skuid just generates SOQL queries, sends them to the Salesforce SOQL black box, and gets a response.  So in essence our VLDV best practices would be exactly the same as Salesforce’s VLDV best practices: use indexed fields when you can, avoid “contains” searches on large objects, make your queries selective, etc.  In the past few releases we’ve added some features to help VLDV customers in certain instances: allowing SOSL on searches, letting you pick which fields are used in table and reference searches, and deferring filter execution until the user clicks the “Apply” button.
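
To make “selective” concrete, here’s a minimal anonymous-Apex sketch; it assumes a standard Account object, and the ownerId variable is illustrative, not something Skuid generates verbatim:

    // ownerId is illustrative; any selective filter on an indexed field works.
    Id ownerId = UserInfo.getUserId();

    // Non-selective: a leading-wildcard "contains" match can't use an index,
    // so it forces a full scan and will crawl (or fail) on millions of rows.
    List<Account> slow = [
        SELECT Id, Name
        FROM Account
        WHERE Name LIKE '%acme%'
    ];

    // Selective: OwnerId and CreatedDate are indexed standard fields, and
    // LIMIT bounds how much data ever comes back to the page.
    List<Account> fast = [
        SELECT Id, Name
        FROM Account
        WHERE OwnerId = :ownerId
          AND CreatedDate = LAST_N_DAYS:30
        LIMIT 200
    ];

The deferred-filter feature follows the same reasoning: the expensive query doesn’t run until the user has built a selective filter and clicked “Apply”.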

I do agree, however, that we should have some kind of document that describes this (even if it ends up being quite similar to the standard SOQL best practices).

Thanks for the response.  Please keep in mind that these questions will come up when selling Skuid (certainly if the other party is informed, and has probably been burned in the past by very large objects and governor limits).  Any guidance, talking points, and anecdotal evidence would be appreciated.