JIT hotspot compiler

The Java JIT compiler compiles Java bytecode to native executable code while your program is running. This improves the performance of your program significantly. The JIT compiler uses runtime information to identify the parts of your application that are runtime intensive. These so-called “hot spots” are then translated into native code. This is the reason why the JIT compiler is also called the “HotSpot” compiler.

The JIT keeps both the original bytecode and the compiled native code in memory, because it can also decide that a certain compilation step must be revised (for example, deoptimized and recompiled when its assumptions no longer hold).
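
A small sketch to make this concrete: a method that is called often enough to become a hot spot. The invocation thresholds and the -XX:+PrintCompilation flag used to watch compilation happen are JVM-specific assumptions, not fixed by the language, and the class name is made up for illustration.

    // Run with: java -XX:+PrintCompilation HotLoop
    // After enough invocations, HotSpot typically prints a line showing that
    // sum(int) was compiled to native code (the threshold is JVM-dependent).
    public class HotLoop {

        // A "hot spot": called many times, so it becomes a candidate for JIT compilation.
        static long sum(int n) {
            long total = 0;
            for (int i = 0; i < n; i++) {
                total += i;
            }
            return total;
        }

        public static void main(String[] args) {
            long result = 0;
            for (int i = 0; i < 20000; i++) {
                result += sum(1000);
            }
            System.out.println(result); // use the result so the work is not optimized away
        }
    }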

Java volatile again

http://en.wikipedia.org/wiki/Volatile_variable

The Java programming language also has the volatile keyword, but it is used for a somewhat different purpose. When applied to a field, the Java volatile guarantees that:

    (In all versions of Java) There is a global ordering on the reads and writes to a volatile variable. This implies that every thread accessing a volatile field will read its current value before continuing, instead of (potentially) using a cached value. (However, there is no guarantee about the relative ordering of volatile reads and writes with regular reads and writes, meaning that it's generally not a useful threading construct.)
    (In Java 5 or later) Volatile reads and writes establish a happens-before relationship, much like acquiring and releasing a mutex.[9]

Using volatile may be faster than acquiring a lock, but it will not work in some situations. The range of situations in which volatile is effective was expanded in Java 5; in particular, double-checked locking now works correctly.[10]
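
As an illustration of the Java 5 guarantee, here is the classic double-checked locking idiom, which is only correct when the field is declared volatile. The class and field names are placeholders, not anything from the quoted source.

    public class LazySingleton {

        // volatile is what makes double-checked locking safe in Java 5+:
        // the write to instance happens-before any read that sees the new value,
        // so no thread can observe a partially constructed object.
        private static volatile LazySingleton instance;

        private LazySingleton() { }

        public static LazySingleton getInstance() {
            LazySingleton result = instance;   // first check, without locking
            if (result == null) {
                synchronized (LazySingleton.class) {
                    result = instance;         // second check, under the lock
                    if (result == null) {
                        instance = result = new LazySingleton();
                    }
                }
            }
            return result;
        }
    }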

http://en.wikipedia.org/wiki/Happened-before

In computer science, the happened-before relation (denoted: →) is a relation between the result of two events, such that if one event should happen before another event, the result must reflect that.

Fail fast vs fail safe

Fail-fast: throws an exception if the collection is structurally modified during iteration.

Fail-safe: works on a separate/clean copy of the data for modification, so it does not throw an exception.

 java.util.concurrent.CopyOnWriteArrayList<E>

A thread-safe variant of java.util.ArrayList in which all mutative operations (add, set, and so on) are implemented by making a fresh copy of the underlying array. 

This is ordinarily too costly, but may be more efficient than alternatives when traversal operations vastly outnumber mutations, and is useful when you cannot or don't want to synchronize traversals, yet need to preclude interference among concurrent threads. The "snapshot" style iterator method uses a reference to the state of the array at the point that the iterator was created. This array never changes during the lifetime of the iterator, so interference is impossible and the iterator is guaranteed not to throw ConcurrentModificationException. The iterator will not reflect additions, removals, or changes to the list since the iterator was created. Element-changing operations on iterators themselves (remove, set, and add) are not supported. These methods throw UnsupportedOperationException. 

All elements are permitted, including null. 

Memory consistency effects: As with other concurrent collections, actions in a thread prior to placing an object into a CopyOnWriteArrayList happen-before actions subsequent to the access or removal of that element from the CopyOnWriteArrayList in another thread. 

This class is a member of the Java Collections Framework.
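
A minimal sketch of the snapshot behaviour described above; the values and printed output in the comments follow from the documented semantics.

    import java.util.Iterator;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class CopyOnWriteDemo {
        public static void main(String[] args) {
            CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
            list.add("a");
            list.add("b");

            Iterator<String> it = list.iterator(); // snapshot of the array at this point

            list.add("c"); // mutation copies the array; the iterator keeps the old one

            while (it.hasNext()) {
                System.out.println(it.next()); // prints only "a" and "b", no ConcurrentModificationException
            }

            // it.remove(); // would throw UnsupportedOperationException

            System.out.println(list); // [a, b, c] - the list itself does contain the new element
        }
    }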

Fail-Safe vs Fail-Fast Iterator in Java Collections
Fail-Safe Iterator (java.util.concurrent – ConcurrentSkipListSet, CopyOnWriteArrayList, ConcurrentMap)

A fail-safe iterator is “weakly consistent” and does not throw any exception if the collection is modified structurally during iteration. Such an iterator may work on a clone of the collection instead of the original collection – as in CopyOnWriteArrayList – while ConcurrentHashMap’s iterator reflects the state of the hashtable at some point at or since the creation of the iterator. Most collections under java.util.concurrent offer fail-safe iterators to their users, and that is by design. Fail-safe collections should be preferred when writing multi-threaded applications to avoid concurrency-related issues. A fail-safe iterator is guaranteed to traverse elements as they existed when the iterator was constructed, and may (but is not guaranteed to) reflect modifications made after construction.
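
A small sketch of the weakly consistent behaviour, assuming a ConcurrentHashMap; the keys and values are arbitrary.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class WeaklyConsistentDemo {
        public static void main(String[] args) {
            Map<String, Integer> map = new ConcurrentHashMap<>();
            map.put("one", 1);
            map.put("two", 2);

            // Modifying the map while iterating over it does NOT throw
            // ConcurrentModificationException; the new entry may or may not be seen.
            for (String key : map.keySet()) {
                map.put("three", 3);
                System.out.println(key);
            }
        }
    }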

Fail-Fast Iterator (java.util package – HashMap, HashSet, TreeSet, etc)

A fail-fast iterator fails as soon as it realizes that the structure of the underlying collection has been modified since iteration began. Structural changes mean adding or removing any element from the collection; merely updating the value of an existing element does not count as a structural modification. It is implemented by keeping a modification count; if the iterating thread notices that the modification count has changed, it throws ConcurrentModificationException. Most collections in the java.util package are fail-fast by design.
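
For contrast, a minimal sketch that triggers the fail-fast behaviour with a plain ArrayList.

    import java.util.ArrayList;
    import java.util.List;

    public class FailFastDemo {
        public static void main(String[] args) {
            List<String> list = new ArrayList<>();
            list.add("a");
            list.add("b");
            list.add("c");

            // Structurally modifying the list while an iterator is active:
            // the next call to Iterator.next() detects the changed modCount
            // and throws ConcurrentModificationException.
            for (String s : list) {
                if (s.equals("a")) {
                    list.add("d");
                }
            }
        }
    }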

Observer and Observable

 java.util.Observable

This class represents an observable object, or "data" in the model-view paradigm. It can be subclassed to represent an object that the application wants to have observed. 

An observable object can have one or more observers. An observer may be any object that implements interface Observer. After an observable instance changes, an application calling the Observable's notifyObservers method causes all of its observers to be notified of the change by a call to their update method. 

The order in which notifications will be delivered is unspecified. The default implementation provided in the Observable class will notify Observers in the order in which they registered interest, but subclasses may change this order, use no guaranteed order, deliver notifications on separate threads, or may guarantee that their subclass follows this order, as they choose. 

Note that this notification mechanism has nothing to do with threads and is completely separate from the wait and notify mechanism of class Object. 

When an observable object is newly created, its set of observers is empty. Two observers are considered the same if and only if the equals method returns true for them.

java.util.Observer

A class can implement the Observer interface when it wants to be informed of changes in observable objects.
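
A bare-bones sketch of the pattern using these two types; the Temperature and Display names are made up for illustration.

    import java.util.Observable;
    import java.util.Observer;

    // The "data" in the model-view paradigm.
    class Temperature extends Observable {
        private int value;

        void setValue(int value) {
            this.value = value;
            setChanged();             // mark this Observable as changed
            notifyObservers(value);   // push the new value to all registered observers
        }
    }

    // Any object that wants to be informed of changes implements Observer.
    class Display implements Observer {
        @Override
        public void update(Observable source, Object arg) {
            System.out.println("Temperature changed to " + arg);
        }
    }

    public class ObserverDemo {
        public static void main(String[] args) {
            Temperature temp = new Temperature();
            temp.addObserver(new Display());
            temp.setValue(25); // prints: Temperature changed to 25
        }
    }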

JVM 7

http://docs.oracle.com/javase/7/docs/webnotes/tsg/TSG-VM/html/memleaks.html

3.1.1 Detail Message: Java heap space

The detail message Java heap space indicates that an object could not be allocated in the Java heap. This error does not necessarily imply a memory leak. The problem can be as simple as a configuration issue, where the specified heap size (or the default size, if not specified) is insufficient for the application.

In other cases, and in particular for a long-lived application, the message might be an indication that the application is unintentionally holding references to objects, and this prevents the objects from being garbage collected. This is the Java language equivalent of a memory leak. Note that APIs that are called by an application could also be unintentionally holding object references.
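
A hedged sketch of the kind of unintentional reference retention the message can point at; the cache here is invented, but the pattern (a long-lived static collection that only grows) is the classic Java-level “leak”.

    import java.util.ArrayList;
    import java.util.List;

    public class LeakyCache {

        // Lives as long as the class is loaded; nothing ever removes entries,
        // so every added object stays reachable and can never be garbage collected.
        private static final List<byte[]> CACHE = new ArrayList<>();

        static void handleRequest() {
            CACHE.add(new byte[1024 * 1024]); // 1 MB retained per call
        }

        public static void main(String[] args) {
            // Eventually fails with: java.lang.OutOfMemoryError: Java heap space
            while (true) {
                handleRequest();
            }
        }
    }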

3.1.2 Detail Message: PermGen space

The detail message PermGen space indicates that the permanent generation is full. The permanent generation is the area of the heap where class and method objects are stored. If an application loads a very large number of classes, then the size of the permanent generation might need to be increased using the -XX:MaxPermSize option.

Interned java.lang.String objects are no longer stored in the permanent generation. The java.lang.String class maintains a pool of strings. When the intern method is invoked, the method checks the pool to see if an equal string is already in the pool. If there is, then the intern method returns it; otherwise it adds the string to the pool. In more precise terms, the java.lang.String.intern method is used to obtain the canonical representation of the string; the result is a reference to the same class instance that would be returned if that string appeared as a literal.
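
A short sketch of what the “canonical representation” means in practice.

    public class InternDemo {
        public static void main(String[] args) {
            String literal = "hello";                    // goes into the string pool
            String constructed = new String("hello");    // a distinct object on the heap

            System.out.println(constructed == literal);          // false: different instances
            System.out.println(constructed.intern() == literal); // true: intern() returns the pooled instance
        }
    }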

When this kind of error occurs, the text ClassLoader.defineClass might appear near the top of the stack trace that is printed.

Aspect not working: aspectOf factory method not found

In my case, the reason was that the aspect was not woven at compile time.

It works well in Eclipse/the IDE, as there is a plugin (the AspectJ runtime plugin) that takes care of it there. However, when I built the WAR, my Ant build did not call aspectjtools to compile and weave the aspect.

[20-05-2014 21:47:00.957] [516102] [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/clientinquiryweb]] [ERROR] [MSC service thread 1-10]
Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'pageDeco' defined in ServletContext resource [/WEB-INF/spring/applicationContext.xml]: No matching factory method found: factory method 'aspectOf()'. Check that a method with the specified name exists and that it is static.
        at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:528)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1015)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:911)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:485)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585)
        at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:913)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:464)
        at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:384)
        at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:283)
        at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:111)
        at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:3392)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:3850)
        at org.jboss.as.web.deployment.WebDeploymentService.start(WebDeploymentService.java:90)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:722)

Lock Overhead vs Lock Contention

Granularity

Before being introduced to lock granularity, one needs to understand three concepts about locks.

lock overhead: The extra resources for using locks, like the memory space allocated for locks, the CPU time to initialize and destroy locks, and the time for acquiring or releasing locks. The more locks a program uses, the more overhead associated with the usage.
lock contention: This occurs whenever one process or thread attempts to acquire a lock held by another process or thread. The more fine-grained the available locks, the less likely one process/thread will request a lock held by the other. (For example, locking a row rather than the entire table, or locking a cell rather than the entire row.)
deadlock: The situation when each of two tasks is waiting for a lock that the other task holds. Unless something is done, the two tasks will wait forever (see the sketch after this list).
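
A compact sketch of the deadlock case: two threads take the same two locks in opposite order, so each ends up waiting for the lock the other holds. The class and lock names are arbitrary.

    public class DeadlockDemo {
        private static final Object LOCK_A = new Object();
        private static final Object LOCK_B = new Object();

        public static void main(String[] args) {
            Thread t1 = new Thread(new Runnable() {
                public void run() {
                    synchronized (LOCK_A) {
                        pause();                    // give t2 time to grab LOCK_B
                        synchronized (LOCK_B) {     // blocks forever: t2 holds LOCK_B
                            System.out.println("t1 got both locks");
                        }
                    }
                }
            });
            Thread t2 = new Thread(new Runnable() {
                public void run() {
                    synchronized (LOCK_B) {
                        pause();                    // give t1 time to grab LOCK_A
                        synchronized (LOCK_A) {     // blocks forever: t1 holds LOCK_A
                            System.out.println("t2 got both locks");
                        }
                    }
                }
            });
            t1.start();
            t2.start();
        }

        private static void pause() {
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }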

There is a tradeoff between decreasing lock overhead and decreasing lock contention when choosing the number of locks in synchronization.

An important property of a lock is its granularity. The granularity is a measure of the amount of data the lock is protecting. In general, choosing a coarse granularity (a small number of locks, each protecting a large segment of data) results in less lock overhead when a single process is accessing the protected data, but worse performance when multiple processes are running concurrently. This is because of increased lock contention. The more coarse the lock, the higher the likelihood that the lock will stop an unrelated process from proceeding. Conversely, using a fine granularity (a larger number of locks, each protecting a fairly small amount of data) increases the overhead of the locks themselves but reduces lock contention. Granular locking where each process must hold multiple locks from a common set of locks can create subtle lock dependencies. This subtlety can increase the chance that a programmer will unknowingly introduce a deadlock.
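
A hedged sketch of this tradeoff for a simple table of counters; the class is invented for illustration, and the two schemes are shown side by side only to contrast granularity (real code would pick one of them, not mix both on the same data).

    public class StripedCounters {

        private final long[] counts;

        // Coarse-grained alternative: a single lock guarding the whole array.
        private final Object globalLock = new Object();

        // Fine-grained: one lock per stripe of the array, so threads updating
        // counters in different stripes do not contend with each other.
        private final Object[] stripeLocks;

        public StripedCounters(int size, int stripes) {
            counts = new long[size];
            stripeLocks = new Object[stripes];
            for (int i = 0; i < stripes; i++) {
                stripeLocks[i] = new Object();
            }
        }

        // Coarse granularity: minimal lock overhead, maximal contention.
        public void incrementCoarse(int index) {
            synchronized (globalLock) {
                counts[index]++;
            }
        }

        // Fine granularity: more locks to create and manage (overhead), but far
        // less contention, since only updates mapping to the same stripe block each other.
        public void incrementFine(int index) {
            synchronized (stripeLocks[index % stripeLocks.length]) {
                counts[index]++;
            }
        }
    }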