In this lesson, we're going to look at data migration using Data Pump Export.
For years, Oracle DBAs used an older tool to export data: a command-line utility called EXP, for export. It was used to pull data out of a table and write it to what's called a dump file. Oracle would store the data in that dump file, and the dump file could then be transported somewhere else and imported using its companion tool, IMP.
Starting in 10g, Oracle recommends the use of Data Pump Export. Data Pump, the name that covers both the export and the import utilities, came out in 10g, and when Oracle released 10g they made some pretty big claims about it. There had been a lot of complaints over the years about EXP and IMP, and a number of problems could occur when you ran them, but the main complaint was that they hadn't been updated significantly in a long time. Data Pump represented that update. The new tools are quite different from the old ones, yet similar to use, and they bring a lot of benefits with them.

Oracle indicated that Data Pump Export can be up to 40 times faster than the old export tool, and from my own testing and experiments I can say that's absolutely the case: in certain circumstances it really is that much faster.
Both Data Pump Export and Import have a number of special features.
The first is compression. The dump files that are written out can be compressed on the fly, making them much smaller so they use significantly less space.
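As a rough sketch of what that looks like on the command line (the credentials, directory object, and file names here are placeholders, not the ones used in this lesson), a compressed table export might be:

    expdp scott/password DIRECTORY=loader_dir DUMPFILE=dept_comp%U.dmp LOGFILE=dept_comp.log COMPRESSION=ALL TABLES=dept

COMPRESSION=ALL, which compresses the table data as well as the metadata, is available in 11g and later and requires the Advanced Compression option; METADATA_ONLY is the alternative.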
We also have encryption available to us. That's very important in situations like dumping data out of production to bring into a development or test environment.
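A minimal sketch of a password-protected export, assuming 11g or later (again, every name here is a placeholder):

    expdp scott/password DIRECTORY=loader_dir DUMPFILE=dept_enc%U.dmp LOGFILE=dept_enc.log ENCRYPTION=ALL ENCRYPTION_PASSWORD=MySecretPhrase TABLES=dept

The same ENCRYPTION_PASSWORD then has to be supplied when the dump file is imported.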
We can also export via a network link, which is actually a database link, so we can export or import data over the network. That saves all of the space that would otherwise have to be taken up by dump files, because none of it is necessary when we pull the data directly over the network.
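The parameter for this is NETWORK_LINK, and the biggest space saving comes on the import side, where no dump file is written at all. A minimal sketch, assuming a database link named prod_link already exists and points at the source database:

    impdp scott/password DIRECTORY=loader_dir NETWORK_LINK=prod_link LOGFILE=net_imp.log TABLES=dept

The directory object is still needed here, but only for the log file.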
We can use parallelism with Export and Import. Parallelism lets us have a number of parallel processes doing the work at the same time, and that's where quite a bit of the performance gain can be achieved.
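A sketch of a parallel export, with placeholder names; the PARALLEL degree is normally paired with a %U dump file specification so that each worker process can write to its own file (and note that a degree greater than 1 is an Enterprise Edition feature):

    expdp scott/password DIRECTORY=loader_dir DUMPFILE=scott_par%U.dmp LOGFILE=scott_par.log PARALLEL=4 SCHEMAS=scott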
We can export data using a subquery. If we want to export data from a table, but not all of it, we can use a subquery and dump out just part of the data.
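The parameter that does this is QUERY, which takes a WHERE clause. As an illustration, a parameter file that exports only the departments with DEPTNO greater than 20 might contain (quoting is simpler in a PAR file than on the command line; the names are placeholders):

    USERID=scott/password
    DIRECTORY=loader_dir
    DUMPFILE=dept_part%U.dmp
    LOGFILE=dept_part.log
    TABLES=dept
    QUERY=dept:"WHERE deptno > 20"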
And we can use Data Pump to rename tables or schemas, or map them to new tablespaces; there are a number of different features there as well.
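The renaming and remapping happens on the import side, with impdp. As an illustration (the target names are made up), an import could move everything from the SCOTT schema into a SCOTT_DEV schema and relocate segments from the USERS tablespace to DEV_DATA:

    impdp system/password DIRECTORY=loader_dir DUMPFILE=scottexp%U.dmp LOGFILE=scott_imp.log REMAP_SCHEMA=scott:scott_dev REMAP_TABLESPACE=users:dev_data

REMAP_TABLE does the same kind of thing for renaming an individual table, and it's available from 11g on.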
So I'm here in the loader directory. What we want to do is use Export to dump data out into a dump file, and we're going to start from scratch. When we do this, we usually create what's called a PAR file, or parameter file. It's going to contain a list of commands that we want Data Pump to use, directing how we export our data. I'm just going to create a text file here and call it deptexp.par.
So let's put a number of commands in here that will direct Data Pump Export in how to export the data. First of all, we give it a user ID. We give it a directory object. We give it a dump file specification, which is what we want our dump file to be named, and the %U in it is very important: %U is a keyword, if you will, that tells Data Pump to put a sequential number into the dump file name. That way, if the export generates a number of files, each one is numbered sequentially so they don't overwrite each other. It's good practice to put %U into your dump file specification.

We'll specify a log file, and we'll give it a file size, which is the maximum size of any one dump file. If the export grows beyond that, it'll create another file, and since we have %U, that file gets the next sequential number. And then we name the table that we want to export: tables equal DEPT. Save this, and we have our PAR file, our parameter file.
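As a rough reconstruction, the finished deptexp.par might look something like this; the password, directory object name, and FILESIZE value are placeholders for whatever was actually used in the demonstration:

    USERID=scott/password
    DIRECTORY=loader_dir
    DUMPFILE=deptexp%U.dmp
    LOGFILE=deptexp.log
    FILESIZE=100M
    TABLES=dept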
In order to run this, we'll use EXPDP, the command-line tool for Data Pump Export, and point it at the parameter file we just created.
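So the invocation from the operating system prompt, assuming we're sitting in the same directory as the parameter file, is simply:

    expdp PARFILE=deptexp.par

Everything else, including the credentials, comes from the PAR file.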
All right, let's look at our output here. We invoke it, and it comes in and says it's starting the SCOTT export job. What it does is create a system-level table, the Data Pump master table, with all of the information, the metadata, that it needs to process the job. Then it gives an estimation of the export size, and then it tells us that the job completed successfully, along with the table it exported and the number of rows. So it exports that data out for us.
Now, that's not the only type of export we can do. Let's open this up and modify it to do a schema-level export. We've done a table-level export here, where we exported the DEPT table; what if we wanted to export all of the Scott schema? We can do that as well. We'll keep the same user ID, and the directory object will remain the same. For the log file, we'll use Scott in the name, and the dump file will be named for Scott as well. We're going to make the file size 70K, partly as a demonstration of how %U works. And then schemas equal Scott. We'll save and close it.
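Again as a sketch, and with placeholder names, the schema-level parameter file might now read:

    USERID=scott/password
    DIRECTORY=loader_dir
    DUMPFILE=scottexp%U.dmp
    LOGFILE=scottexp.log
    FILESIZE=70K
    SCHEMAS=scott

The deliberately small FILESIZE forces the export to roll over into several numbered dump files, which is what makes the %U behavior visible.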
This time we'll run the same command, only with the Scott PAR file: we make a copy of the parameter file, call it scottexp.par, and execute.
We notice that there's quite a bit more output here, because we've directed it to export the entire schema. Not only is that going to include the tables, it's also going to include any sequences, indexes, constraints, those types of things that we see. Also notice that it wrote multiple dump files, each one given a sequential number. If we were to look in our loader directory, we'd see that the data has been dumped out as eight different files, each with a sequential number. That keeps us from overwriting the same file or hitting the maximum size for a single file.
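Just to illustrate how the %U numbering plays out, with a dump file specification like scottexp%U.dmp (a placeholder name) the files on disk would be named along these lines:

    scottexp01.dmp
    scottexp02.dmp
    ...
    scottexp08.dmp

%U expands to a fixed-width two-digit number that starts at 01 and increments for each new file.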
So that's how we can use Data Pump Export to export data.