These isolation levels are defined in terms of three “phenomena” that are either permitted or not at a given isolation level:
• Dirty read: The meaning of this term is as bad as it sounds. You are permitted to read uncommitted, or dirty, data. You would achieve this effect by just opening an OS file that someone else is writing and reading whatever data happens to be there. Data integrity is compromised, foreign keys are violated, and unique constraints are ignored.
• Non-repeatable read: This simply means that if you read a row at time T1 and attempt to reread it at time T2, the row may have changed: it may have been updated, it may have been deleted, and so on.
• Phantom read: This means that if you execute a query at time T1 and re-execute it at time T2, additional rows may have been added to the database that affect your results. This differs from the non-repeatable read in that with a phantom read, the data you already read has not changed; rather, more data satisfies your query criteria than before (both phenomena are sketched below).
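As a minimal sketch of these two phenomena at the READ COMMITTED level (T, ID, and X are hypothetical names), consider two sessions interleaving like this:

-- Session 1, at time T1:
SELECT x FROM t WHERE id = 1;        -- returns, say, X = 100

-- Session 2: change the row, add a new one, and commit.
UPDATE t SET x = 200 WHERE id = 1;
INSERT INTO t (id, x) VALUES (2, 300);
COMMIT;

-- Session 1, at time T2, still in the same transaction:
SELECT x FROM t WHERE id = 1;        -- X = 200 now: a non-repeatable read
SELECT COUNT(*) FROM t;              -- one more row: a phantom read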
The SQL isolation levels are defined based on whether or not they allow each of the preceding phenomena.
Isolation Level  | Dirty Read   | Nonrepeatable Read | Phantom Read
Read uncommitted | Possible     | Possible           | Possible
Read committed   | Not possible | Possible           | Possible
Repeatable read  | Not possible | Not possible       | Possible
Serializable     | Not possible | Not possible       | Not possible
Oracle Database provides the following transaction isolation levels (how to select each is shown after the list):
-Read Committed Isolation
-Serializable Isolation
-Read-Only Isolation
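For reference, these are chosen per transaction or per session; the statements below are standard Oracle syntax:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- the default
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET TRANSACTION READ ONLY;

-- Or make SERIALIZABLE the default for the remainder of the session:
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;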
READ UNCOMMITTED
The READ UNCOMMITTED isolation level is the one that permits dirty reads; Oracle does not support it, since every Oracle query is read-consistent. Note that reads are not confined to queries. UPDATE statements have two components: a read component, as defined by the WHERE clause, and a write component, as defined by the SET clause. UPDATE statements read from and write to the database; all DML statements have this property. The single-row INSERT using the VALUES clause is the only exception, as such statements have no read component, just a write component.
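As a small illustration (the EMP table and its columns are hypothetical here):

UPDATE emp                            -- write component: the SET clause
   SET sal = sal * 1.10
 WHERE deptno = 10;                   -- read component: the WHERE clause

-- The one exception, DML with no read component at all:
INSERT INTO emp (empno, ename, deptno) VALUES (9999, 'ADAMS', 10);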
READ COMMITTED
In the read committed isolation level, which is the default, every query executed by a transaction sees only data committed before the query—not the transaction—began.
There are no dirty reads. There may be non-repeatable reads (i.e., rereads of the same row within the same transaction may return a different answer) and phantom reads (i.e., newly inserted and committed rows that were not visible earlier in the transaction become visible to a query). READ COMMITTED is perhaps the most commonly used isolation level in database applications everywhere, and it is the default mode for Oracle databases.
The really bad news about read committed in a database that uses shared read locks is that we may make the end user wait for the wrong answer. We can still receive an answer that never existed in the committed database state at any point in time, just as with a dirty read, but this time we made the user wait for it.
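A sketch of how a lock-based READ COMMITTED implementation can produce such an answer, assuming a hypothetical ACCOUNTS table holding $100 in account 1 and $0 in account 2 (so the true total is always $100):

-- Session 1: begins a scan that sums all balances; it reads account 1 ($100)
-- first, then works toward account 2's block.
SELECT SUM(balance) FROM accounts;

-- Session 2: transfers the $100 while the scan is in flight, and commits.
UPDATE accounts SET balance = 0   WHERE acct_id = 1;
UPDATE accounts SET balance = 100 WHERE acct_id = 2;
COMMIT;

-- Session 1's scan now reaches account 2, blocks on the lock until the commit,
-- then reads the committed $100 there, returning SUM = $200: a total that
-- never existed in any committed state. Oracle's multi-versioning would
-- instead return $100, the answer as of the moment the query began.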
REPEATABLE READ
The goal of REPEATABLE READ is to provide an isolation level that gives consistent, correct answers and prevents lost updates.
Most databases (not Oracle) achieve repeatable reads via the use of row-level shared read locks. A shared read lock prevents other sessions from modifying data that we have read. This, of course, decreases concurrency. Oracle opted for the more concurrent, multi-versioning model to provide read-consistent answers.
In Oracle, using multi-versioning, we get an answer that is consistent with respect to the point in time the query began execution.
If we have REPEATABLE READ enabled in a database that employs shared read locks (and not multi-versioning), lost update errors can’t happen. The reason lost updates will not happen in those databases is that the simple act of selecting the data leaves a lock on it, and once read by our transaction, that data cannot be modified by any other transaction. Now, if your application assumes that REPEATABLE READ implies “lost updates can’t happen,” you are in for a painful surprise when you move your application to a database that does not use shared read locks as an underlying concurrency control mechanism.
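A sketch of the lost update in a multi-versioning database with no shared read locks (EMP is hypothetical; two sessions at READ COMMITTED):

-- Session 1: read the current salary; the SELECT takes no lock.
SELECT sal FROM emp WHERE empno = 7788;    -- returns 1000

-- Session 2: meanwhile, apply a different raise and commit.
UPDATE emp SET sal = 2000 WHERE empno = 7788;
COMMIT;

-- Session 1: write back 10% on top of its stale read (1000 * 1.10).
UPDATE emp SET sal = 1100 WHERE empno = 7788;
COMMIT;
-- Session 2's raise is silently overwritten: a lost update. Under shared read
-- locks, session 2 would have blocked until session 1's transaction ended.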
SERIALIZABLE
In the serializable isolation level, a transaction sees only those changes committed at the time the transaction (not the query) began, as well as changes made by the transaction itself.
Oracle takes an optimistic approach to serialization: it gambles that the data your transaction wants to update won’t be updated by any other transaction. Usually the gamble pays off, especially in quick-transaction, OLTP-type systems. If no one else updates your data during your transaction, this isolation level, which generally decreases concurrency in other systems, provides the same degree of concurrency as it would without SERIALIZABLE transactions. The downside is that you may get the ORA-08177 error if the gamble doesn’t pay off. If you think about it, however, it’s worth the risk. If you’re using SERIALIZABLE transactions, you shouldn’t expect to update the same information as other transactions. If you do, you should use SELECT ... FOR UPDATE.
In many other implementations, you will find this achieved with shared read locks, along with their corresponding deadlocks and blocking. In Oracle, we do not get any blocking, but we will get the ORA-08177 error if other sessions change the data we want to change.
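A minimal sketch of the gamble failing (T is a hypothetical table):

-- Session 1: opt in to serializable isolation for this transaction.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT x FROM t WHERE id = 1;

-- Session 2: change that row and commit after session 1's transaction began.
UPDATE t SET x = x + 1 WHERE id = 1;
COMMIT;

-- Session 1: try to update the now-changed row.
UPDATE t SET x = x + 1 WHERE id = 1;
-- ORA-08177: can't serialize access for this transaction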
READ ONLY
READ ONLY transactions are very similar to SERIALIZABLE transactions, the only difference being that they do not allow modifications, so they are not susceptible to the ORA-08177 error. READ ONLY transactions are intended to support reporting needs, where the contents of the report need to be consistent with respect to a single point in time. In other systems, you would use REPEATABLE READ and suffer the associated effects of the shared read lock.
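A sketch of a report run inside a READ ONLY transaction (the ORDERS and ORDER_LINES tables are hypothetical):

SET TRANSACTION READ ONLY;
-- Every query until COMMIT sees the database as of this point in time,
-- so the two results below are consistent with each other.
SELECT COUNT(*) FROM orders;
SELECT SUM(amount) FROM order_lines;
COMMIT;                               -- ends the read-only transaction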
READ ONLY transactions are not trouble-free, however. Whereas you might see an ORA-08177 error in a SERIALIZABLE transaction, you should expect to see an ORA-01555 snapshot too old error with READ ONLY transactions. This will happen on a system where other people are actively modifying the information you are reading. The changes (undo) made to this information are recorded in the undo segments. But undo segments are used in a circular fashion, in much the same manner as redo logs. The longer the report takes to run, the greater the chance that some undo you need to reconstruct your data won’t be there anymore. The undo segment will have wrapped around, and the portion of it you need will have been reused by some other transaction. At this point, you will receive the ORA-01555 error and have to start over again.
The only solution to this sticky issue is to have undo segments that are sized correctly for your system. Time and time again, I see people trying to save a few megabytes of disk space by having the smallest possible undo segments (“Why ‘waste’ space on something I don’t really need?” is the thought). The problem is that the undo segments are a key component of the way the database works, and unless they are sized correctly, you will hit this error.
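With automatic undo management, “sized correctly” largely means giving the undo tablespace enough room and setting UNDO_RETENTION to cover your longest-running query; a sketch, with an illustrative value:

ALTER SYSTEM SET UNDO_RETENTION = 10800;   -- ask Oracle to keep ~3 hours of undo
-- V$UNDOSTAT records recent undo activity; MAXQUERYLEN (in seconds) is a
-- lower bound on the retention your longest query needs.
SELECT MAX(maxquerylen) FROM v$undostat;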
Implications of Multi-version Read Consistency
A Common Data Warehousing Technique That Fails
A common data warehousing technique I’ve seen people employ goes like this:
1. They use a trigger to maintain a LAST_UPDATED column in the source table, much like the method described in the last chapter in the “Optimistic Locking” section.
2. To initially populate a data warehouse table, they remember what time it is right now by selecting out SYSDATE on the source system. For example, suppose it is exactly 9:00 a.m. right now.
3. They then pull all of the rows from the transactional system—a full SELECT * FROM TABLE—to get the data warehouse initially populated.
4. To refresh the data warehouse, they remember what time it is right now again. For example, suppose an hour has gone by—it is now 10:00 a.m. on the source system. They will remember that fact. They then pull all changed records since 9:00 a.m. (the moment before they started the first pull) and merge them in.
They believe that they now have all of the records in the data warehouse that were modified since they did the initial pull. They may actually have all of the records, but just as likely they may not. This technique does work on some other databases—ones that employ a locking system whereby reads are blocked by writes and vice versa. But in a system where you have non-blocking reads, the logic is flawed.
To see the flaw in this example, all we need to do is assume that at 9:00 a.m. there was at least one open, uncommitted transaction. At 8:59:30 a.m., it had updated a row in the table we were to copy. At 9:00 a.m., when we started pulling the data and thus reading the data in this table, we would not see the modifications to that row; we would see the last committed version of it. If it was locked when we got to it in our query, we would read around the lock. If it was committed by the time we got to it, we would still read around it, since read consistency permits us to see only data that was committed in the database when our statement began. We would not read that new version of the row during the 9:00 a.m. initial pull, nor would we read the modified row during the 10:00 a.m. refresh. The reason? The 10:00 a.m. refresh would pull only records modified since 9:00 a.m. that morning, but this record was modified at 8:59:30 a.m. We would never pull this changed record.

In many other databases, where reads are blocked by writes and a committed but inconsistent read is implemented, this refresh process would work perfectly. If at 9:00 a.m., when we did the initial pull of data, we hit that row and it was locked, we would have blocked, waited for it, and read the committed version. If it were not locked, we would just read whatever was there, committed.

So, does this mean the preceding logic just cannot be used? No, it means that we need to get the “right now” time a little differently. We need to query V$TRANSACTION and find the earlier of the current time and the time recorded in the START_TIME column of that view. We will need to pull all records changed since the start time of the oldest active transaction (or the current SYSDATE value if there are no active transactions):
select nvl( min( to_date( start_time, 'mm/dd/rr hh24:mi:ss' ) ), sysdate )
  from v$transaction;
In this example, that would be 8:59:30 a.m. when the transaction that modified the row started. When we go to refresh the data at 10:00 a.m., we pull all of the changes that had occurred since that time; when we merge these into the data warehouse, we’ll have everything we need.
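Putting it together, a sketch of the corrected refresh cycle (SOURCE_TABLE, LAST_UPDATED, and the bind variables are illustrative names):

-- Step 1: before pulling, compute the safe "as of" time and remember it.
SELECT NVL(MIN(TO_DATE(start_time,'mm/dd/rr hh24:mi:ss')), SYSDATE)
  FROM v$transaction;                -- save the result as :this_pull_time

-- Step 2: pull everything changed since the PREVIOUS cycle's saved time.
SELECT * FROM source_table
 WHERE last_updated >= :last_pull_time;

-- Step 3: merge the rows into the warehouse, then store :this_pull_time
-- as :last_pull_time for the next refresh.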
Write Consistency
Consistent Reads and Current Reads
Oracle performs two types of block gets when processing a modification statement:
• Consistent reads: When “finding” the rows to modify
• Current reads: When getting the block to actually update the row of interest
Let’s use the process Oracle follows to execute an UPDATE to illustrate write consistency:
1) Perform a consistent read driven by the WHERE clause. The consistent-read SCN is the point in time at which the UPDATE statement began.
2) In that consistent read, find the rows that satisfy the predicate (the rows of interest as of the statement’s starting SCN).
3) Then read the latest version of each such block in current mode and compare the consistent-read image of the row with the current-mode image.
4) If no trigger is involved, only the columns referenced in the WHERE clause are compared. If a trigger fires, the :OLD columns referenced inside the trigger must be compared as well, because Oracle obtains the :OLD values in consistent-read mode, and those columns may have changed by the time of the current-mode read.
5) If the compared values differ, the row was affected by another transaction, and Oracle performs a mini-rollback (restart) of the statement; otherwise the update proceeds normally. This assumes you are using READ COMMITTED isolation. If you are using SERIALIZABLE isolation, then at this point you would receive an ORA-08177: can't serialize access for this transaction error.
For example, suppose we execute UPDATE T SET X = 5 WHERE Y = 6. The consistent read finds a row where Y = 6; we then read that block in current mode, only to discover that another session has already updated that row from Y = 6 to Y = 7 and committed. The consistent-read and current-mode images of the row no longer agree, so the update’s view of the data is inconsistent, which triggers a mini-rollback.
In READ COMMITTED mode, after the transaction rolls back your update, the database will restart the update (i.e., change the point in time at which the update is “as of”), and instead of updating the data again, it will go into SELECT FOR UPDATE mode and attempt to lock all of the rows WHERE Y=6 for your session. Once it does this, it will run the UPDATE against that locked set of data, thus ensuring this time that it can complete without restarting.
Therefore, the way to avoid or reduce mini-rollbacks before a batch update is to first acquire the locks with SELECT FOR UPDATE and only then perform the update, as sketched below.
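A sketch of that pattern, continuing the example above:

-- Lock the target rows first; this waits until they are ours, so the
-- subsequent UPDATE cannot be restarted by a concurrent change to Y
-- in those rows.
SELECT * FROM t WHERE y = 6 FOR UPDATE;
UPDATE t SET x = 5 WHERE y = 6;
COMMIT;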