Big data is a voluminous dataset, drawn from a variety of sources and of varying types, that is continuously generated at high velocity. It can only be managed by specialized processes, different from traditional relational database tools, in order to infer any relevance or insight from it and to establish its veracity. In other words, big data is a huge collection of data from different sources and of varying types that grows exponentially over time, cannot be stored or processed by traditional means, yet is valuable for insight. This description stems from the historical use of the term up through the more recent definition of parameters (the V's) that are used to capture the various aspects of this sort of dataset. My historical starting point is the use of the term between 1997 and 1998 in a sense similar to today's, meaning an information explosion, beginning with Michael Cox and David Ellsworth.
Depending on your persistence provider, you may encounter an SQLIntegrityConstraintViolationException when you try to persist an object that violates a database constraint. To handle this exception, you need to find out which class wraps it, by inspecting the exception with e.getClass() and walking its cause chain with e.getCause(). For example, if you are using Hibernate as your persistence provider, you can catch a javax.ejb.EJBException and then dig the org.hibernate.exception.ConstraintViolationException out of its nested causes:

    try {
        // persistence transactions
    } catch (Exception e) {
        if (e instanceof javax.ejb.EJBException) {
            logger.debug("Exception instance of javax.ejb.EJBException");
            Throwable cause = e.getCause(); // typically the persistence exception
            if (cause != null) {
                cause = cause.getCause(); // the Hibernate-level exception
                if (cause instanceof org.hibernate.exception.ConstraintViolationException) {
                    // a duplicate-key style violation; log it and respond accordingly
                    logger.info("update_duplicate_response | " + cause.getMessage());
                }
            }
        }
    }
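The nested if/instanceof checks above hard-code exactly two levels of wrapping, which breaks if the provider adds or removes a layer. A more robust approach is to walk the whole cause chain until the target type appears. Below is a minimal sketch of that idea; the findCause helper and the CauseFinder class are my own illustrative names, not part of any JPA or Hibernate API.

```java
import java.sql.SQLIntegrityConstraintViolationException;

// Sketch: search an exception's entire cause chain for a given type,
// instead of assuming the violation sits exactly two getCause() calls deep.
public class CauseFinder {

    @SuppressWarnings("unchecked")
    public static <T extends Throwable> T findCause(Throwable t, Class<T> type) {
        while (t != null) {
            if (type.isInstance(t)) {
                return (T) t;          // found the wrapped exception
            }
            t = t.getCause();          // descend one level in the chain
        }
        return null;                   // target type not present anywhere
    }

    public static void main(String[] args) {
        // Simulate a constraint violation wrapped two levels deep,
        // as a container or persistence provider might deliver it.
        Throwable root = new SQLIntegrityConstraintViolationException("duplicate key");
        Throwable wrapped = new RuntimeException("persistence failure",
                new IllegalStateException("tx rollback", root));

        SQLIntegrityConstraintViolationException found =
                findCause(wrapped, SQLIntegrityConstraintViolationException.class);
        System.out.println(found != null ? found.getMessage() : "not found");
    }
}
```

With this helper, the catch block shrinks to a single findCause call per exception type of interest, and it keeps working even if the depth of wrapping changes between provider versions.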