Channel: SAP NetWeaver Administrator

ASCS ENQ performance tuning


Hello SAP Administrators,

 

the ASCS is not really a new topic, but I guess most of you never changed the ASCS parameters after installing or migrating the system. In most cases the defaults are good enough, but for big system environments you have to optimize them.

A good sizing/parametrization exercise should also cover the ASCS parameters!

 

Basics

ASCS

- Message Server

- Enqueue Server

 

Every new system will be installed with an ASCS and the option for ERS. ERS is the Enqueue Replication Service, which is used in cluster scenarios.

The future is the standalone enqueue server which is just another name for the ENQ service inside the ASCS.

There are a lot of documents and notes regarding the enqueue server; I will collect them here together with some hidden parameters and also show you my tests.

In the past, the integrated ENQ server could be administered in the CI ABAP profile via RZ10 or directly in the file system.

The new ASCS can only be configured via the profile on the filesystem. It is not visible anymore via RZ11/RZ10.

Don't believe what you see in RZ11! You just see the defaults or any old value which was not deleted from the profile.

It is essential that the binaries from (A)SCS and ERS instance are from the same binary set.

Please delete all old enqueue parameters from default and instance profiles!

 

Test environment:

Kernel Release 742 SP210

 

PAS with ASCS (NUC)

AIX 7.1

DB2 10.5 FP3/4

120WP

 

Application servers:

20x Linux VMware

150 WP per server

=> 3000 WP

 

Calculate some parameters

 

1) calculation of enque/server/max_requests

workprocesses + enqueue table size in MB * 20|25 (NUC|UC)

=> 3120 + 1024 * 20 (because we have a NUC system)

=> 23600
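
A quick way to redo this calculation for your own system (a minimal shell sketch; the values are just the example numbers from above, not a recommendation):

WPS=3120            # total work processes across the PAS and all application servers
ENQ_TABLE_MB=1024   # planned enqueue table size in MB
FACTOR=20           # 20 for non-Unicode (NUC), 25 for Unicode (UC)
echo "enque/server/max_requests = $(( WPS + ENQ_TABLE_MB * FACTOR ))"
# => 23600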

 

2) calculation of the snapshot area

- the snapshot memory area must be greater than the enqueue table size

- The size of this memory area is the multiplication of parameters enque/snapshot_pck_size and enque/snapshot_pck_ids

- There is no restriction or recommendation about the best number of packages and their size, because this depends on the business process

- 1903553 - Standalone Enqueue Server (ENSA) and snapshot packages

 

ENQ table size: 1024MB

snapshot memory=enque/snapshot_pck_ids*enque/snapshot_pck_size

=> default size = 10.000 * 50.000 = 500MB

=> just edit the parameters if your ENQ table size is above 500MB

=> with the settings below (1.000.000 packages of 80.000 bytes each) the snapshot memory is:

1.000.000 * 80.000 = 80.000.000.000 bytes

= 80 GB

 

=> so in this case we could definitely reduce the parameter, but our tests have shown that these settings work pretty well
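
A minimal shell sketch of the sanity check from 2) above (the snapshot memory must be greater than the enqueue table size; the values are the ones used in this setup, with enque/table_size given in KB):

SNAPSHOT_PCK_IDS=1000000     # enque/snapshot_pck_ids
SNAPSHOT_PCK_SIZE=80000      # enque/snapshot_pck_size in bytes
ENQ_TABLE_SIZE_KB=1024000    # enque/table_size
if [ $(( SNAPSHOT_PCK_IDS * SNAPSHOT_PCK_SIZE )) -gt $(( ENQ_TABLE_SIZE_KB * 1024 )) ]; then
  echo "OK: snapshot area is larger than the enqueue table"
fi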

 

 

 

3) Client profile (application Server):

enque/process_location = REMOTESA

enque/serverhost = <hostname of the enqueue-server>

enque/serverinst = <instance number of the enqueue-server>

enque/deque_wait_answer = TRUE

 

 

4) ASCS instance profile:

#If you set this parameter to 0, the lock table is not placed in any pool

ipc/shm_psize_34 = 0

 

#default 50KB => parameter value 50.000 (30.000 and 100.000 possible values)- determines the size of

#the individual packets. A lock entry requires about 1 KB.

enque/snapshot_pck_size = 80000

 

#enqueue table size

enque/table_size = 1024000

 

#default: 10.000 (10-1.000.000 possible) => newer releases default: 1600 - determines the

#maximum number of snapshot packets

enque/snapshot_pck_ids = 1000000

 

# hidden parameter, see note 1850053 - ENSA suspended the ERS network connection

enque/ni_queue_size = 1000

 

#default: 1000 - parameter determines the number of processes that can be connected to the

#enqueue server. Set the parameter to the same value as the total number of work processes in the system

enque/server/max_clients = 5000

 

#new name for 'max_query_requests' is 'enque/server/query_block_count' since release 800

enque/server/max_query_requests = 5000

 

# default 1000 - maximum number of enqueue requests that can be processed simultaneously

enque/server/max_requests = 23600

 

#Max. number of subsequent asynchronous requests - a synchronous request is forced

#after every n asynchronous requests. You specify this number n in this parameter.

enque/async_req_max = 5000

 

#The number of I/O threads - a value higher than 4 has never resulted in an increase in throughput.

enque/server/threadcount = 8

 

#parameter specifies how many memory blocks (each has 32 KB) are reserved in the

#replication server for transferring the data

enque/server/query_block_count = 5000

 

#ENQ server name

rdisp/enqname = $(rdisp/myname)

 

#mechanism for communication between the threads - with value true, the communication

#is quicker but generates a heavy load on the system

enque/server/use_spinning=false

Undocumented parameters which have to be tested on your own:

enque/server/req_block_size = 13333

enque/enrep/req_block_count = 14000

 

We have tested a lot with the threads and the snapshot size. The settings which you see above are the final setup. In the past we had a lot of ENQ time (up to 30%) in our mass processing batches. We could reduce this to about 1-2%.

This reduced the overall runtime of the massively parallel processing by about 40-60%!!! Nobody expected such a big benefit for these processes. But whether the improvements also show up in your environment depends on your application and your current ENQ time.

 

Please analyze your ASCS and application server profiles for these parameters. If you see a lot of ENQ time while analyzing with transaction STAD or SE30, you should check the performance of your ENQ server.

This can be done with SM12 (OK code: test/dudel). Please use the following note to learn how to do this: 1320810 - Z_ENQUEUE_PERF

 

Here is an example from a small test system without application servers (I will add some screenshots from the big environment in some weeks):

=> you can see that most of the requests are <= 1ms

=> more interesting is an environment with more application servers, because of the RTT (network round trip times)

 

Another indicator could be the program SAPLSENA. If you see high times for this program in transaction ST03 you should take care of it. Also, when you see this program running for a long time in SM66/SM50, this is an indicator of bad ENQ performance.

 

 

A good starting point for your analysis could be this blog on sap-perf.ca

 

and then => happy tuning

 

Details:

920979 - Out of memory in the standalone enqueue server

1850053 - ENSA suspended the ERS network connection

1903553 - Standalone Enqueue Server (ENSA) and snapshot packages

654744 - Several errors corrected in the standalone enqueue server

sap.help ENQ Server

 

If you have any further questions, don't hesitate to comment on the blog or contact me or one of my colleagues at Q-Partners ( [info_at_qpcm_dot_de | mailto:info@qpcm.de] )

 

Best Regards,

Jens Gleichmann

 

Technology Consultant at Q-Partners (www.qpcm.eu )

 

Edit History:

#V1.1 Added example of 1320810 - Z_ENQUEUE_PERF


What you can check when you face an issue during Configuration Wizard execution


Have you ever faced an issue when you executed the Configuration Wizard to set up a PI system, or maybe when running the post system copy steps? When you face an issue during the execution of this wizard, do you know what the first checks are that you must do to solve it?

 

My suggestion is to always make sure you are using the latest patch level of the components, according to the SAP KBA below:

1749574 - Checking for the latest Configuration Wizard patch

 

If your system is based on SAP NetWeaver 7.0x, you must apply the latest version of LMTOOLS component.

If your system is based on SAP NetWeaver 7.1 or higher, you must apply the latest version of LMCTC and LMCFG components.

 

Additionally, if your system is a PI, apply the latest patches of components listed in section "Reason and Prerequisites > Process Integration" of SAP note:

1309239 - Configuration Wizard: PI NetWeaver initial setup

 

As you can see, the following components are relevant for PI systems:

MESSAGING - MESSAGING SYSTEM SERVICE

SAPXIAF - XI ADAPTER FRAMEWORK

SAPXITOOL - XI TOOLS

SAPXIGUI - PI GUI (only valid for NW 7.11)

SAPXIGUILIB - PI GUILIB (since NW 7.30)

SAPXIESR - ESR

 

If you are not on the latest patch level of the components above (patch level is different from support package level), you should apply them.

Why? Because there are several cases in which the issue you are facing is related to a bug that was already solved in a more recent patch level. Therefore, you should always make sure the components above are on the latest patch level available for the support package level of your system.


** Never apply a different support package to only one component of your system. Support packages can only be applied to the entire system **

 


Where could you find the software component list, to check the component's version?

You can do that by going to page: http://<host>:5<num>00/sap/monitoring/ComponentInfo
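
For example, for an instance with number 00 this is port 50000 (a minimal sketch; the host name and user are placeholders, and the page may prompt for valid credentials):

curl -u <user> "http://<host>:50000/sap/monitoring/ComponentInfo"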

 


If the issue remains, what can I do?

If the issue still remains, even after applying the latest patch level versions of the components mentioned above, then it is suggested that you analyze the log files from the CTC protocol. You can download them in the Configuration Wizard:

  1. Choose History of Executed Configuration Tasks;
  2. Select the configuration task;
  3. Click on Download Log button.

 

As soon as you get the log files, open index.html, which lists all the steps from the Configuration Wizard. You can find further details about the error in the link of the steps that failed. Further details can also be found inside the defaultTrace file, in directory /usr/sap/<SID>/<InstanceID>/j2ee/cluster/serverX/log/.
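
To narrow down the error quickly, a minimal sketch (the server0 directory and the defaultTrace file name pattern are assumptions; adapt them to your server node):

grep -iE "error|exception" /usr/sap/<SID>/<InstanceID>/j2ee/cluster/server0/log/defaultTrace*.trc | tail -50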

How to solve NIECONN_REFUSED issues during sapcontrol execution


When you are executing the Software Provisioning Manager tool, or if you manually run the sapcontrol command, you may face an issue like the one below:

 

Execution of the command "/usr/sap/<SID>/SYS/exe/sapcontrol -nr 00 -host abcde -function GetInstanceProperties" finished with return code 1. Output:

 

<date> <time>

GetInstanceProperties

FAIL: NIECONN_REFUSED (Connection timed out), NiRawConnect failed in plugin_fopen()

 

 

Usually, this issue happens because sapstartsrv is not properly running. Below, you can see some steps to follow in order to solve this issue (a consolidated command sketch follows the list):

 

  1. Make sure /usr/sap/sapservices does exist on your system. If not, please follow as described in SAP note below:
    823941  - SAP start service on Unix platforms

  2. Run command ps -ef | grep sapstartsrv and check if the service is running for the instance. sapstartsrv must be pointing to the start or instance profile (according to the profile configuration of your system);

  3. If it is running, then manually kill the process;

  4. Make sure sapstartsrv is properly working by running the sapstartsrv command as <sid>adm from the kernel directory. If it returns all options from the help, it means the sapstartsrv command is working. Otherwise, it has issues and you must patch your kernel.

  5. If sapstartsrv is properly working, manually run the command above as <sid>adm from kernel directory:
    ./sapstartsrv pf=/sapmnt/<SID>/profile/<start or instance profile> -D

  6. After that, run the command below and check if the issue still persists:
    /usr/sap/<SID>/<instance>/exe/sapcontrol -prot NI_HTTP -nr <inst number> -function GetInstanceProperties

  7. Does the NIECONN_REFUSED error remain? If so, then make sure all hosts are properly set in the /etc/hosts file.

  8. If no error returns, it means the issue is not occurring anymore and you will be able to continue with the installer.
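
A consolidated sketch of steps 2-6 (SID, instance, profile name and PID are placeholders; run the sapstartsrv and sapcontrol commands as <sid>adm):

ps -ef | grep sapstartsrv                    # step 2: is the service running, and with the right profile?
kill <pid of sapstartsrv>                    # step 3: stop the running process
cd /usr/sap/<SID>/<instance>/exe             # steps 4/5: work from the kernel directory
./sapstartsrv pf=/sapmnt/<SID>/profile/<start or instance profile> -D
/usr/sap/<SID>/<instance>/exe/sapcontrol -prot NI_HTTP -nr <inst number> -function GetInstanceProperties   # step 6: re-test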

Auto-Tune Your Buffers With Kernel 742


A couple of decades ago, a fellow by the name of Andy Hildebrand was a geophysicist working in the field of oil exploration. He developed a digital signal processing algorithm to interpret sound waves that could help locate oil deposits. It didn’t take him long to realize this algorithm could be applied to the human voice as well, pitch-shifting to correct for off-key singing. The result, in 1998, was Cher’s hit Believe, and the wildly successful software Auto-Tune. Today you would be hard pressed to find a pop song that doesn’t use Auto-Tune, whether to intentionally distort or to subtly correct. Auto-Tune has become a critical part of the toolkit in any recording studio, and is even being built in to electric guitars for realtime perfect pitch.

 

("Autotuneevo6" by Source. Licensed under Fair use via Wikipedia.)

 

Just like a pop star or audio engineer, the ability to autotune is now part of the toolkit for the Basis administrator, too. No, I don’t mean that a bunch of SAP geeks are forming a rock band (although TechEd attendees might get to see some jamming on stage), but rather you can now use the power of algorithms to set many of your memory and buffer parameters automatically.

 

 

Zero Administration Memory Management

Automatically adjustable memory parameters have been around, at least for Windows-based ABAP systems, for quite a while now. There’s nothing really new there. Since at least R/3 4.6b the basic parameters for controlling work process memory have either been automated, or defaulted high enough that, given the typical amounts of physical memory in servers of the day, the limits would not be reached. Parameters that Unix admins tweaked regularly were, for Windows admins, practically invisible.

 

But the term Zero Administration Memory Management was still a bit of a misnomer, as the system administrator still needed to monitor and manually adjust quite a few parameters to control the nametab buffer, the program buffer, the generic and single record table key buffers, among others. Various Notes detailed recommended starting values, and then over a period of days, weeks, or months, you would adjust to keep hit ratios high and swap counts low.

 

In other words, there was still a fair amount of manual tuning to get the buffers just right, although as the cost of physical memory came down and modern servers were built to accommodate much more of it, one could argue that you just throw more memory in the system and make all the buffers really large and be done with it.

 

This approach lacks a certain amount of elegance and finesse, though, and could still leave you open to provisioning too much memory for one buffer when it could have been used more effectively elsewhere.

 

Continued Evolution

Enter NetWeaver ABAP 7.4, and more specifically 7.4 SR2 and its attendant kernel version, 742.

 

Each new kernel release has introduced new parameters, or modified the defaults in older ones, and with each release the recommendations from SAP for initial parameter settings have changed slightly.

 

The 740, 741, and 742 kernels, however, have introduced a new level of automation for a number of the parameters Basis admins have become used to configuring, through the use of a cascading series of formulas. Each parameter derives its value from another parameter, and all of the automated parameters ultimately find their origin from the well-known and easily configurable PHYS_MEMSIZE.

 

You can easily look up these formulas, either in Note 2085980 (New features in memory management as of Kernel Release 7.40), or in your own 7.4 system with transaction code RZ11 or report RSPARAM. Here, however, I’ll try to interpret the formulas in plain English and show how each value derives from the previous parameter.

 

The Parameters

PHYS_MEMSIZE

The venerable PHYS_MEMSIZE, which defines how much of main memory, in megabytes, should be made available to the SAP instance, has not changed. It still defaults to the amount of physical RAM installed in the server, and in the case of a dedicated application server should generally be left unset. Doing so means that most of the other parameters will automatically adjust themselves whenever you add additional memory to the server, without changing a single setting.

 

If your server runs both a database and application instance (in old terminology, a Central Server), then you should still follow the old advice about setting PHYS_MEMSIZE to restrict how much memory SAP will use so that enough is reserved for the database. Likewise if you run two or more SAP instances on the same host: you will set this parameter for each instance to appropriately divide up the memory between them. However, the beauty here is that, in most cases, this is the only parameter you need to adjust.

 

Extended Memory Pool

The Extended Memory Pool, or how much memory will actually be used by the system, is a dynamically variable  amount during system operation, starting with the value of em/initial_size_MB and then increasing automatically when system demand requires it. In prior releases, the initial size of this pool was equal to PHYS_MEMSIZE, but now it starts at 70% of that amount, with a maximum size of 500 GB. This parameter, em/initial_size_MB, then becomes the starting point for most of the buffer size calculations.
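
For example, on a dedicated application server with 64 GB of RAM (the same case as in the comparison tables below):

em/initial_size_MB = 0.70 * 65,536 MB = 45,875 MB (rounded)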

 

Export/Import Buffer

The exception is the Export/Import Buffer, which is controlled by rsdb/obj/buffersize and rsdb/obj/max_objects. The size of this buffer is set to 1% of PHYS_MEMSIZE, with a minimum of 4 MB, and the max_objects value is set to 25% of the buffersize, with a minimum of 2000 objects.

 

Program Buffer

The Program Buffer, controlled via abap/buffersize, and often the bane of many a Basis admin’s existence in years gone by, is typically the largest buffer in the system. Previously defaulted to about 300 MB, it now defaults to 15% of the initial extended memory pool (em/initial_size_MB), rounded up to the nearest multiple of 4 MB.
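
Continuing the 64 GB example:

abap/buffersize = 0.15 * 45,875 MB = 6,881.25 MB, rounded up to the next multiple of 4 MB = 6,884 MB = 7,049,216 KB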

 

Table Buffer (previously Generic Key Buffer)

Controlled via zcsa/table_buffer_area and zcsa/db_max_buftab, this buffer previously defaulted to about 30 MB. Now it defaults to 10% of the initial extended memory pool (em/initial_size_MB), but with a minimum of ~30 MB and a maximum of ~3 GB. The number of objects that can be stored in the buffer (zcsa/db_max_buftab) previously defaulted to 5000, but now is a function of the buffer size divided by 5120, and with a minimum of 20,000 objects.
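
Again for the 64 GB example:

zcsa/table_buffer_area = 0.10 * 45,875 MB = 4,587.5 MB, capped at the ~3 GB maximum => 3,333,333,333 bytes

zcsa/db_max_buftab = 3,333,333,333 / 5,120 = 651,042 (rounded)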

 

Single Records Buffer

This buffer doesn’t show up in ST02 anymore, but it still exists. It’s size, defined by rtbb/buffer_length, used to be about 10 MB but is now 10% of the Table Buffer. Likewise, the number of object entries, defined by rtbb/max_tables, used to be 500 but is now 10% of the number of entries allowed in the Table Buffer.

 

Nametab Buffers

This is a collection of four related buffers.

 

Table Definition Buffer

The number of table definitions (i.e., which fields make up the table, etc) that can be buffered is controlled by rsdb/ntab/entrycount. Previously set at 200,000, it now takes its value from zcsa/db_max_buftab, i.e. the same number of table definitions can be buffered as the number of table generic keys. This parameter, the number of entries in the Table Definition Buffer, then goes on to also define the number of entries allowed in the other three Nametab buffers, as well as form the basis for the size of those buffers.

 

Field Definition Buffer

This is controlled via rsdb/ntab/ftabsize, and is often the second largest buffer in the system (after the Program Buffer). Previously defaulted to ~250 MB, it now defaults to 1 KB per allowed object (i.e., rsdb/ntab/entrycount), to a maximum of ~500 MB.
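
In the 64 GB example:

rsdb/ntab/ftabsize = 651,042 entries * 1 KB = 651,042 KB, capped at the ~500 MB maximum => 500,000 KB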

 

Short NTAB Buffer

This buffer contains a derived summary of table and field definitions and is controlled via rsdb/ntab/sntabsize. Previously defaulted to ~15 MB, now it is 10% of the Field Definition Buffer size.

 

Initial Record Layouts Buffer

This buffer contains the record layout based upon the field types and is controlled via rsdb/ntab/irbdsize. Previously defaulted to ~30 MB, now it is 20% of the Field Definition Buffer size.

 

Other Parameters

Not every buffer parameter has such a helpful formula, so there are still some buffers you must manually tune yourself, if needed. These have not changed their defaults from kernel 722.

 

CUA Buffer

rsdb/cua/buffersize (~3 MB) and rsdb/cua/max_objects (25% of rsdb/cua/buffersize, minimum of 2000; this is a new parameter).

 

Screen Buffer

zcsa/presentation_buffer_area (~4 MB) and sap/bufdir_entries (2000).

 

Calendar Buffer

zcsa/calendar_area (~500 KB) and zcsa/calendar_ids (200).

 

Online Text Repository (OTR) Buffer

rsdb/otr/buffersize_kb (~4 MB) and rsdb/otr/max_objects (2000).

 

Export/Import Shared Memory (ESM) Buffer

rsdb/esm/buffersize_kb (~4 MB) and rsdb/esm/max_objects (2000).

 

Comparing NetWeaver 7.4 and 7.01 Systems

Let’s compare how these new defaults work in two identical systems, each with 64 GB of physical RAM.

 

Parameter                 Units   NetWeaver 7.4    NetWeaver 7.01
PHYS_MEMSIZE              MB      65,536           65,536
em/initial_size_MB        MB      45,875           65,536
rsdb/obj/buffersize       KB      671,089          8,192
rsdb/obj/max_objects              167,772          2,000
abap/buffersize           KB      7,049,216        300,000
zcsa/table_buffer_area    B       3,333,333,333    30,000,000
zcsa/db_max_buftab                651,042          5,000
rtbb/buffer_length        KB      325,521          10,000
rtbb/max_tables                   65,104           500
rsdb/ntab/entrycount              651,042          200,000
rsdb/ntab/ftabsize        KB      500,000          250,000
rsdb/ntab/sntabsize       KB      50,000           15,000
rsdb/ntab/irbdsize        KB      100,000          30,000

 

Obviously, the new defaults result in some significantly increased initial values!

 

Let’s look at the same comparison, but this time between two NetWeaver 7.4 systems, one with the same 64 GB of RAM, and one with 8 GB of RAM (to see the effect of minimums and maximums in the ranges).

 

Parameter                 Units   64 GB RAM        8 GB RAM
PHYS_MEMSIZE              MB      65,536           8,192
em/initial_size_MB        MB      45,875           5,734
rsdb/obj/buffersize       KB      671,089          83,886
rsdb/obj/max_objects              167,772          20,972
abap/buffersize           KB      7,049,216        884,736
zcsa/table_buffer_area    B       3,333,333,333    601,295,421
zcsa/db_max_buftab                651,042          117,441
rtbb/buffer_length        KB      325,521          58,720
rtbb/max_tables                   65,104           11,744
rsdb/ntab/entrycount              651,042          117,441
rsdb/ntab/ftabsize        KB      500,000          117,441
rsdb/ntab/sntabsize       KB      50,000           11,744
rsdb/ntab/irbdsize        KB      100,000          23,488

 

Tuning

Does this mean you never again have to tune these parameters? Not at all. However, it means you are likely to start from a more reasonable base value, minimizing the chance of your system performing poorly on day one due to insufficient buffer space. You still must watch the buffers over time, and you can still adjust these parameters manually. Be aware, however, that when you adjust one that is “upstream” it may have ripple effects “downstream.”

 

If you find a large number of parameters need increasing, consider increasing the “upstream” parameter(s) and letting the effect cascade. If you have manually set PHYS_MEMSIZE to something other than (less than) your physical RAM, consider adjusting just this one parameter first.

 

More Information

Note 2085980: New features in memory management as of Kernel Release 7.40

Note 88416: Zero administration memory management for the ABAP server

My landscape will be complex, what do I do?


Do you have an Enterprise Portal system that is in front of an SRM system?

Or maybe in front of an ECC?

What about in front of both?!?!

 

If that was not enough, you still need to configure a single entry point using the SAP Web Dispatcher.

 

How do you configure all these systems to work together?

 

Fear not! Read the Web Dispatcher for Multiple Systems - Understanding and Examples - Application Server Infrastructure - SCN Wiki page and start putting it all together!

 

Feedback on the WIKI is most welcome! You can leave a comment at the WIKI itself, or at this blog post.

 

Cheers,

Isaías

 

Related spaces:

 

Edit on Sep/14/2015: fixed the link to the WIKI .

Z abap report for Basis daily check


Hello,

 

 

I want to share a Z Report that maybe could help some Basis Administrators who need to make a daily checklist on the system.

 

 

First of all, I know there are specific SAP ways to do that, like SSAA, CCMS, Solution Manager, and so on.

 

 

But in our case, and maybe in the case of other Basis analysts too, we don't have Solution Manager configured for every customer, and we need to check the status of every landscape through a checklist, opening transactions one by one. That takes some time, time that could be useful for something else.

 

 

So, basically, this report prints some of the principal Basis transactions (SM66, SM51, SM37, tablespace status, ST22, DBA checks) on only one page.

In my case, at least, it saves some time every day.

 

 

This report is only useful with Oracle databases.

 

(Screenshots of the report output: zchecklist1.jpg, zchecklist2.JPG, zchecklist3.JPG)

Regards,

Richard W. L. Brehmer

Free course: SAP NetWeaver Upgrades in a Nutshell


Are you a SAP NetWeaver system administrator? Or just interested in learning more about how you should manage updates to SAP NetWeaver? SAP NetWeaver Upgrades in a Nutshell is coming to openSAP this November, where you can learn how to easily manage upgrades to your system.

SAP NetWeaver is an open technology platform that allows your business to integrate both SAP and non-SAP applications. SAP NetWeaver offers a comprehensive set of technologies for running mission-critical business applications and integrating people, processes, and information.  It also serves as the technical foundation of SAP's Business Process Platform offerings by providing capabilities for service provisioning, composition (service consumption), and governance. Many systems such as SAP ERP, SAP NetWeaver Portal, SAP CRM and SAP SRM are based on SAP NetWeaver Application Server and use it as a runtime environment.

This course is designed to help you understand the technical background of SAP NetWeaver upgrades as well as the tools and general steps needed to perform an upgrade end to end. SAP NetWeaver Upgrades in a Nutshell will explain the purpose of upgrades, how to check for supported platforms, and how to locate guides in SAP Service Marketplace. Learners will also learn how to use the Maintenance Planner, Maintenance Optimizer and Software Update Manager (SUM). Maintenance Planner is a solution hosted by SAP and offers easy maintenance of systems in your landscape. Maintenance Optimizer is in the SAP Solution Manager and leads you through the planning, download and implementation of support package stacks, which contain a set of support packages for your systems. SUM is a multi-purpose tool that supports various processes, such as performing a release upgrade, installing enhancement packages, applying Support Package Stacks or updating single components on SAP NetWeaver. After the course has completed you will understand the entire end to end process for SAP NetWeaver upgrades, making the process simpler for you and your colleagues.

 

SAP NetWeaver Upgrades in a Nutshell is a special edition one week course on openSAP, starting from November 2. All of the content will be available once the course opens and requires approximately 4-6 hours effort in total. Learners can access the content at a time that suits their schedule and to earn a Record of Achievement, the course assignment must be completed before December 1, 2015. This course is provided free of charge and all you need to sign up is a valid email address.

Sign up today and simplify your SAP NetWeaver updates with SAP NetWeaver Upgrades in a Nutshell.

MOPZ no longer selects the latest available Java patches by default


Recently I discovered that MOPZ no longer selects the latest available Java patches for the selected Support Package. It seems that SAP has changed the default behavior in MOPZ so that it selects patch level 0 when choosing stack dependent files. SAP Solution Manager 7.1 SP12 was used as the reference system, I believe it has MOPZ 3.0.

 

I chose to post this blog to SAP NetWeaver Administrator instead of SAP Solution Manager or Application Lifecycle Management because I felt that this is a basis concern and most basis folks are tuned to the selected space.

 

Now what's the big deal? Unless you are aware of the new default behavior you might end up with a NON-WORKING SYSTEM. Sorry SAP, your patch level 0 releases are rarely production quality. I don't remember when our landscape last had patch level 0 systems in it. Typically all systems have at least patch 1, some have patch 2 or 3, and we stick with that until it is time to update or upgrade again. When we updated our NetWeaver 7.31 SP10 system to NetWeaver 7.31 SP16, even the Portal Content Catalog wasn't operable, as demonstrated by the attached screenshot.

 

(Screenshot: Portal Content Catalog broken after applying SPS16 (patch level 0) for NetWeaver 7.31)

 

Okay, got it. Anything else I should be aware of? Unfortunately yes. Unless you use "Add Java Patches" in the Stack-Dependent step of MOPZ while generating the stack XML, YOU WON'T BE ABLE TO APPLY PATCHES LATER ON, unless you do it manually, that is. MOPZ seems to use the selected set of files to determine the available patches. If your system is already at the target level, you won't have any files to select and thus MOPZ won't find any patches. Basically, what that meant for us is that we had to restore the system and start over. I did go through the manual process of selecting the latest patch levels of about 15 components, always taking note of the dependencies. It is, however, always a gamble to manually patch any system: you don't know for sure whether your defects will be fixed by the selected components, nor will you know all possible side effects of the selected components.

 

Any closing words? In my opinion, although I don't know the reasons behind the change, the new behavior is disruptive. I have been using MOPZ together with SUM (and JSPM before that) to patch Java systems for years, though I admit I haven't been actively using them in the past couple of years. That said, I have trusted MOPZ to always select the latest patches while generating the stack XML, which it no longer does. MOPZ should at least give a very visible warning that the default behavior has been changed and that Java patches are no longer selected by default.

 

And yes, SAP has released KBA 2022451 to address the topic although it doesn't address/cover everything in my opinion.


Autosave of SAP instances' work folder


Imagine, what if you have a classic or highly available SAP system and it either crashes more than twice, or you have to restart it at least two times, or due to an issue the HA software does it automatically (a.k.a. switchover or failover)? Well, in this case all the developer traces and logs will be gone, including all the important information. These traces and logs are necessary for root cause analysis.

 

How to overcome the above situation?
I've just tested it in one of my test systems and found a way to save this information instead of losing it.

 

What to consider before doing anything?

Pro

  • SAP saves the work folder automatically during startup
  • No 3rd party tool involved
  • Compression
  • Date and time of saving are part of the file name

Con

  • The file system may overflow
  • You have to manually delete unnecessary archives later on (see the cleanup sketch further below)
  • Some delay during the startup procedure

 

You want more?
Follow the white rabbit instructions:

  1. Check in /usr/sap/<SID>/<instance>/work/sapstart.log what profile is responsible for starting up SAP.
    Example: Startup Profile: "/usr/sap/<SID>/SYS/profile/<SID>_D<VEBMGS><NR>_<hostname>"
  2. Find a suitable Execute_<NR> line entry for this profile where <NR> is a unique, two-digit number in this profile. Increase the highest available Execute_<NR> by one.
    Example:
    • highest available Execute_<NR> entry is: Execute_08 = <command>
    • next entry will start therefore with: Execute_09 = <command>
  3. Add the following line: Execute_<NR> = <command>
    Example:
    Execute_09 = local $(DIR_CT_RUN)/SAPCAR -ci $(DIR_INSTANCE)/work -F "core*" -f $(DIR_INSTANCE)/work_`date +%Y%m%d_%H%M%S`.SAR

 

Explanation:

  • Execute_09 = local executes the command on localhost. Check this out.
  • $(DIR_CT_RUN)/SAPCAR -ci $(DIR_INSTANCE)/work compresses the work folder and ignore files in use.
  • -F "core*" excludes all core files (these one may quite huge).
  • -f $(DIR_INSTANCE)/work_`date +%Y%m%d_%H%M%S`.SAR defines the path and file name including time stamp.
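
Since unnecessary archives have to be deleted manually later on (see the Con list above), here is a minimal cleanup sketch (GNU find syntax; the 30-day retention is only an example, the options may differ on AIX/HP-UX, and you should always review the list before deleting):

find /usr/sap/<SID>/<instance> -maxdepth 1 -name "work_*.SAR" -mtime +30 -ls       # review candidates first
find /usr/sap/<SID>/<instance> -maxdepth 1 -name "work_*.SAR" -mtime +30 -delete   # then remove them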

 

Result:

ls -al /usr/sap/<SID>/<instance>/ | grep -i work

drwxr-xr-x 3 <sid>adm sapsys    12288 Oct 14 15:19 work

-rw-r--r-- 1 <sid>adm sapsys 83832120 Oct 14 15:18 work_20151014_151738.SAR


Enjoy.

Autosave of SAP instances' work folder (on Windows)


As already mentioned in my previous blog post Autosave of SAP instances' work folder, sometimes it is really necessary to keep the content of the work folder. For SAP systems running on Unix (AIX, HP-UX, Oracle Solaris, Linux) I have already found a workaround. But how about Windows? Well, after some testing I can share my experiences with you. Expect the same pros and cons as previously:

Pro

  • SAP saves the work folder automatically during startup
  • No 3rd party tool involved
  • Compression
  • Date and time of saving are part of the file name

Con

  • The file system may overflow
  • You have to manually delete unnecessary archives later on
  • Some delay during the startup procedure

 

Are you curious? Let's check the steps.

 

First of all, please note that it's Windows, therefore some more manual steps will be necessary. Why exactly? Because regional settings, in this case the date and time format, may differ on operating system level depending on your region.

 

Okay, and what now? Tell me more, I can't wait. Let the show begin!

 

Steps:

  1. Check in <drive>:\usr\sap\<SID>\D<VEBMGS><NR>\work\sapstart.log what profile is responsible for starting up SAP.
    Example: Startup Profile: "\\<sapglobalhost>\sapmnt\<SID>\SYS\profile\START_D<VEBMGS><NR>_<hostname>"
  2. Find a suitable Start_Program_<NR> line entry for this profile where <NR> is a unique, two-digit number in this profile.
  3. Increase the highest available Start_Program_<NR> by one.
    Example
    • highest available Start_Program_<NR> entry is: Start_Program_04 = <command>
    • next entry will start therefore with: Start_Program_05 = <command>
  4. Add the following line: Start_Program_<NR> = <command>
    Example: Start_Program_05 = immediate cmd /c start /B $(DIR_CT_RUN)\SAPCAR -cfi $(DIR_INSTANCE)\work_"%date:.=%_%time::=%".SAR -C $(DIR_INSTANCE)\work . -F "core*"

    Note:"%date:.=%_%time::=%"
    depends on regional settings of Windows. Check first what echo %date% and echo %time% gives back. All characters which are not allowed in file names on Windows (for example: "/" or ":") must be replaced by other characters or removed. You can define the character to be changed after ":" in variable with character defined after "=". E.g.: outcome echo %time% is 10:23:52,02 with not allowed characters of filename but echo %time::=.% replaces ":" by "." this way outcome is 10.26.58,33. Or echo %time::=% replaces ":" by blank and outcome is 103009,89 which is already a valid name for files. But this is rather for Microsoft than for SAP.

 

Explanation of profile entry and SAPCAR options:

Same is valid as in Autosave of SAP instances' work folder

 

Result:

dir <drive>:\usr\sap\<SID>\D<VEBMGS><NR> | findstr /I "work"

15.10.2015  15:50    <DIR>          work

15.10.2015  15:44           252.201 work_15102015_154442,47.SAR

 

Success. Or rather almost success.


We're already close to the end. If you want to extract the archive, an error appears:

 

SAPCAR -xvf <drive>:\usr\sap\<SID>\D<VEBMGS><NR>\work_15102015_154442,47.SAR -R <path>\work_test

SAPCAR: error opening <drive>:\usr\sap\<SID>\D<VEBMGS><NR>\work_15102015_154442 (error 6). The system cannot find the file specified.

SAPCAR: 0 file(s) extracted

SAPCAR: error opening 47.SAR (error 6). The system cannot find the file specified.

SAPCAR: 0 file(s) extracted

 

Just run SAPCAR without arguments and you'll find the answer: the "," is interpreted as a separator between archive names, which is why SAPCAR tried to open two files above.


And what's our trick to overcome this?

  • Rename the archive, remove "," in filename and extract it by SAPCAR
  • Replace "," by "*" in SAPCAR command and it'll work:
    SAPCAR -xf <drive>:\usr\sap\<SID>\D<VEBMGS><NR>\work_15102015_154442*47.SAR -R <path>\work_test
    SAPCAR: processing archive <drive>:\usr\sap\<SID>\D<VEBMGS><NR>\work_15102015_154442,47.SAR (version 2.01)
    SAPCAR: 105 file(s) extracted

 

Oh, yeah.

 

Enjoy

My top tips for 10 times improvement to traditional BDLS Runtime


Overview

 

One of the least exciting requirements of being an SAP technical consultant is the execution of BDLS after system copies, normally with the refresh of QA system landscapes from a copy of production. With modern SAN replication it is quite possible to copy a 14 TB system online within hours, yet you can wait 3~5 days for BDLS to finish if you just run /nBDLS with default settings.

 

Causation

 

The problem is of course that SAP will perform a full scan of every table with a logical system name field each time a BDLS conversion is run. On very large systems this results in runtimes of days, particularly when converting multiple source logical system names (e.g. BWPCLNT100, PRDCLNT100, SCPCLNT100, CRPCLNT100).

 

 

More Info


There are more in-depth, but also more complex, solutions in the articles from Muniraju Hanumanthiah: BDLS IN LESS THAN 2 HOURS - Part 1

 

 

Top Tips

 

Here are my top tips for BDLS improvements.

 

  1. Naturally run in noarchivelog mode in Oracle or simple backup mode for SQLServer
  2. Build bdls helper indexes on all tables with logical system fields. Kick this off as soon as the system is started and let it run whilst you are performing the other refresh activities (profiles, printers, users etc)
  3. Execute BDLS conversions in parallel in the background with variants, e.g. A-K, L-S, T, U-Z, <not A-Z>.
  4. Optionally, don't convert (exclude) really BIG tables if they contain transactional data that won't be referenced in your test landscape. If so, use an individually named exclusion list in the BDLS variants and include those tables in the "EXCLUDETABLENAME" entries of the SQL index build script.
  5. Once completed, drop the bdls helper indexes by simply extracting the "drop index" lines from the build script below into a "drop_bdls_index.sql" and running it (see the sketch after the Oracle section).

 

 

Oracle

 

Here is my bdls.sql script for Oracle. It generates another SQL script which contains the CREATE INDEX statements for all tables with LOGSYS fields.

--------------- bdls.sql --------------------------

set pagesize 0

set lines 255;

set feedback off

column createline format a255;

column dropline format a255;

column aline format a255;

column rownum noprint

column table_name noprint

 

 

spool create_bdls_ind.sql

prompt spool create_bdls_ind.log

prompt set echo on

prompt set feedback on

prompt set timing on

 

 

select rownum, logsys.table_name,

      'create index sapr3."'||

      'bdls_index_newc'||rownum||'" on sapr3."'||

      logsys.table_name ||

      '"( "'||

      client.column_name ||

      '" , "' ||

      logsys.column_name ||

      '" ) PARALLEL 12  NOLOGGING TABLESPACE PSAPSR3;' createline,

      'alter index sapr3."'||

      'bdls_index_newc'||rownum||'" NOPARALLEL ;' aline,

      'analyze index sapr3."'||

      'bdls_index_newc'||rownum||'" estimate statistics sample 2 PERCENT;' aline,

      '-- drop index sapr3."'||

      'bdls_index_newc'||rownum||'" ;' dropline

from dba_tab_columns logsys, dba_tab_columns client

where (logsys.table_name,logsys.column_name)in

              ( select tabname, fieldname

                  from sapr3.dd03L

                where domname in ('LOGSYS','EDI_PARNUM') )

  and  client.column_name in ('MANDT','CLIENT','RCLNT','MANDANT')

  and  logsys.table_name = client.table_name

  and  logsys.table_name not in

('EXCLUDETABLENAME1_EG_VBAK',

'EXCLUDETABLENAME2_EG_BKPF',

'EXCLUDETABLENAME2_EG_COPE' )

union

  select rownum, logsys.table_name,

    'create index sapr3."'||

      'bdls_index_newnc'||rownum||'" on sapr3."'||

      logsys.table_name ||

      '"(  "' ||

      logsys.column_name ||

      '" ) PARALLEL 12 NOLOGGING TABLESPACE PSAPSR3;' createline,

      'alter index sapr3."'||

      'bdls_index_newnc'||rownum||'" NOPARALLEL ;' aline,

      'analyze index sapr3."'||

      'bdls_index_newnc'||rownum||'" estimate statistics sample 2 PERCENT;' aline,

      '-- drop index sapr3."'||

      'bdls_index_newnc'||rownum||'" ;' dropline

from dba_tab_columns logsys

where (logsys.table_name,logsys.column_name)in

              ( select tabname, fieldname

                  from sapr3.dd03L

                where domname in ('LOGSYS','EDI_PARNUM') )

  and  not exists (select 1 from dba_tab_columns client

              where  client.column_name in ('MANDT','CLIENT','RCLNT','MANDANT')

                    and  logsys.table_name = client.table_name )

  and  logsys.table_name not in

  ('EXCLUDETABLENAME1_EG_VBAK',

'EXCLUDETABLENAME2_EG_BKPF',

'EXCLUDETABLENAME2_EG_COPE' )

order by 1,2

/

 

prompt spool off

 

spool off;

prompt Now start script create_bdls_ind.sql

exit

---------------------------------


For those who are not Oracle V7 masters: the script doesn't change anything, it is very safe... it simply builds another SQL script called "create_bdls_ind.sql", which is a long list of create index bdls_index_<uniquenumber> statements on tables which have logical system name fields (LOGSYS) that BDLS scans and updates.

 

 

 

sqlplus sapr3/password @bdls.sql

 

<review the output file create_bdls_ind.sql>

 

You then run create_bdls_ind.sql (make sure this is on your target system) with the command:

 

sqlplus sapr3/password @create_bdls_ind.sql

 

(Naturally, just ignore any errors when it tries to create indexes on views rather than tables... you can prevent this by adding a condition like "and exists ( select 1 from dba_tables tab where logsys.table_name = tab.table_name )" )
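
Once the BDLS runs have finished, the helper indexes can be dropped again (tip 5 above). One possible way, assuming the generated create_bdls_ind.sql keeps each commented "-- drop index" line on a line of its own (review drop_bdls_index.sql before running it in sqlplus):

grep -- '-- drop index' create_bdls_ind.sql | sed 's/^ *-- //' > drop_bdls_index.sql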

 

Oracle Localization

Before running the script you need to localize a few items (a sed sketch for these replacements follows the list):

  • Schema owner: Globally replace the schema owner string sapr3 with the schema of your system eg sapsr3 or sapprd
  • Tablespace Name: Globally replace the tablespace name "PSAPSR3" with a tablespace of your choice that has sufficient space to create indexes;  aim for about 1/20 of the total size of your tables. So if you have 10TB of table data you will need 500GB free of table-space.
  • Bitmap or Btree: Consider changing the 'create index' statements to 'create bitmap index'; it should give smaller indexes with improved lookup times. However, I have used b*tree indexes by default.
  • Parallelism: Change globally the PARALLEL 12 to approximately the number of CPU cores available in your server.  I have seen improvement with setting PARALLEL to double the number of cores, but this is very server specific.
  • PSAPTEMP: Make sure you have a very healthy PSAPTEMP again around 1/20 of the size of your table data.
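
A minimal sed sketch for these global replacements (sapsr3, PSAPBDLSI and PARALLEL 16 are only example target values; adjust them to your system and review bdls_local.sql before use):

sed -e 's/sapr3/sapsr3/g' -e 's/PSAPSR3/PSAPBDLSI/g' -e 's/PARALLEL 12/PARALLEL 16/g' bdls.sql > bdls_local.sql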

 

 

 

 

DB2

 

Here is my DB2 bdls.clp script. Admittedly it is less refined and doesn't have the union search for tables that have LOGSYS but no client field (which is relevant to BI/BW systems), but it does the job (don't hesitate to improve it and post your update).

 

-------- bdls.clp -------------

 

connect to PRD user sapprd using password ;

values 'connect to PRD user sapprd using password ; ' ;

values 'set current degree = ''10'' ; ' ;

select 'create index sapprd.'||

      'bdls_index_newcd'||row_number() over()||' on sapprd."'||

      logsys.tbname ||

      '"( '||

      client.name ||

      ' , ' ||

      logsys.name ||

      ' ) collect statistics ; ' createline

from sysibm.syscolumns logsys, sysibm.syscolumns client

where (logsys.tbname,logsys.name)in

              ( select tabname, fieldname

                  from sapprd.dd03L

                where domname in ('LOGSYS','EDI_PARNUM') )

  and  client.name in ('MANDT','CLIENT','RCLNT','MANDANT')

  and  logsys.tbname = client.tbname

  and  logsys.tbname in ( select name from sysibm.systables);

-----------------------


Localization for DB2


  • Instance Name: Replace globally PRD with your DB instance name.
  • Password: Update password with the real cleartext password
  • Object Owner : Replace globally the string sapprd with the name of your database owner which normally remains the old prd schema for DB unless you do a schema conversion.

 

Run the script with an sh command like this, which simply runs the clp command and then strips the first 6 lines from the output log to create the index build command file bdls_run.clp.

 

 

mv bdls_run_log.clp bdls_run_log.clp.$$

mv bdls_run.clp bdls_run.clp.$$

db2 -tpxnf bdls.clp -z bdls_run_log.clp

cat bdls_run_log.clp | sed '1,6d' > bdls_run.clp


Running the index build shell script

mv bdls_run.log bdls_run.log.$$

db2 -tpxvnf bdls_run.clp -z bdls_run.log

 

 

 

Index cleanup on DB2 is done with another simple drop script.

 

drop_bdls.clp

 

connect to PRD user sapprd using password ;

values 'connect to PRD user sapprd using password ; ' ;

select 'drop index '|| name || ';' dropline

from sysibm.sysindexes ind

  where upper(ind.name) like upper('bdls_index%');

 

 

 

do_drop.sh

mv bdls_do_drop.log bdls_do_drop.log.$$

db2 -tvpxnf bdls_do_drop.clp -z bdls_do_drop.log

 

 

 

 

SQL Server

 

For SQL Server, simply use cut-and-paste from within SQL Server Management Studio.

 

 

---------------------------------------------------------------------------------------

select concat('create index "' ,

      'bdls_index_newc' , ROW_NUMBER() OVER (ORDER BY (SELECT 'A')), '" on ers."' ,

      logsys.TABLE_NAME ,

      '"( "',

      client.COLUMN_NAME ,

      '" , "' ,

      logsys.COLUMN_NAME ,

      '" ) WITH (MAXDOP=8) ; ' ) createline,

      concat('drop index "' ,

      'bdls_index_newc', ROW_NUMBER() OVER (ORDER BY (SELECT 10000)) , '" on ers."', logsys.TABLE_NAME,'" ; ') dropline

from INFORMATION_SCHEMA.COLUMNS logsys, INFORMATION_SCHEMA.COLUMNS client

where concat(logsys.TABLE_NAME , '||' ,  logsys.COLUMN_NAME ) in

              ( select concat( TABNAME , '||' ,  FIELDNAME )

                  from ers.DD03L

                where DD03L.DOMNAME in ('LOGSYS','EDI_PARNUM') )

  and  client.COLUMN_NAME in ('MANDT','CLIENT','RCLNT','MANDANT')

  and  logsys.TABLE_NAME = client.TABLE_NAME

  and  logsys.TABLE_NAME not in

('EXCLUDETABLENAM1_EG_VBAK',

'EXCLUDETABLENAME2_EG_BKPF',

'EXCLUDETABLENAME2_EG_COPE' )

and exists ( select 1 from INFORMATION_SCHEMA.TABLES tables where logsys.TABLE_NAME = tables.TABLE_NAME and TABLE_TYPE = 'BASE TABLE')

union

  select

    concat('create index "',

      'bdls_index_newnc', ROW_NUMBER() OVER (ORDER BY (SELECT 10000)) , '" on ers."',

      logsys.TABLE_NAME ,

      '"(  "' ,

      logsys.COLUMN_NAME ,

      '" ) WITH (MAXDOP=8) ;  ') createline,

          concat('drop index "',

      'bdls_index_newnc',ROW_NUMBER() OVER (ORDER BY (SELECT 10000)), '" on ers."', logsys.TABLE_NAME,'" ;') dropline

from INFORMATION_SCHEMA.COLUMNS logsys

where concat(logsys.TABLE_NAME , '||' , logsys.COLUMN_NAME ) in

              ( select concat(TABNAME , '||' , FIELDNAME )

                  from ers.DD03L

                where DOMNAME in ('LOGSYS','EDI_PARNUM') )

  and  not exists (select 1 from INFORMATION_SCHEMA.COLUMNS client

              where  client.COLUMN_NAME in ('MANDT','CLIENT','RCLNT','MANDANT')

                    and  logsys.TABLE_NAME = client.TABLE_NAME )

  and  logsys.TABLE_NAME not in

  ('EXCLUDETABLENAM1_EG_VBAK',

'EXCLUDETABLENAME2_EG_BKPF',

'EXCLUDETABLENAME2_EG_COPE' )

  and exists ( select 1 from INFORMATION_SCHEMA.TABLES tables where logsys.TABLE_NAME = tables.TABLE_NAME and TABLE_TYPE = 'BASE TABLE')

order by 1,2

---------------------------------------------------------------------------------------

 

SQL Server Localization

 

  • Owner: Globally replace the owner string 'ers' with the owner of your tables, e.g. sapprd or srq.
  • Parallelism: Replace MAXDOP=8 with the degree of parallelism you require, maybe 4x the CPU cores.

 

Normally I simply run the script, cut the createline output into another SQL window and run it; dropping the indexes afterwards is done with the dropline output.

 

 






HANA

 

For SAP ECC systems, tables still in the row store would benefit from BDLS helper indexes as well. Even in-memory processing is improved when the data can be accessed optimally without huge in-memory scans. That being said, the HANA systems I have worked on so far are all BW analytics, so I have yet to need to perform regular system refreshes. Please post your experiences.

Final Comments.

 

I'm looking forward to everyone's comments, suggested improvements, or warnings.

 

Further improvements can be made with doing updates directly with sql statements using parallel statements eg

update /*+ PARALLEL(VBAK,12) */  VBAK set LOGSYS='TSTCLNT200' where LOGSYS='PRDCLNT100';

However this is of course updating SAP directly by database which is traditionally discouraged.

 

 

 

 



SAP Kernel update on Unix for ABAP


I recently updated the SAP kernel of one of my test systems and I would like to share my experiences with you. In my case it worked well, therefore if you follow the same steps it will work for you, I promise.

 

What I did was following:

 

Step 1: Check the current kernel release and patch level

 

Login to SAP and choose menu System -> Status:

 

Kernel release    742
Sup.Pkg lvl.      200

 

DBSL Patch Level  110

 

So my system was on SAP Kernel release 7.42 with patch level 200 and DBSL on patch level 110.

 

Step 2: Download the new kernel

 

They are available under http://support.sap.com/swdc -> Support Packages & Patches -> Browse Download Catalog:


Here navigate to Additional Components -> SAP Kernel and choose the corresponding bit version (32 or 64), make variant (empty or EXT), and Unicode or non-Unicode (empty or UNICODE | UC) for your system.

 


The easiest way to get this information is to run the disp+work command on operating system level as the <sid>adm user, or from the SAP system using report RSBDCOS0. For example:

 

[1]disp+work
disp+work information
kernel release                742
kernel make variant          742_REL
compiled on                  Linux GNU SLES-11 x86_64 cc4.3.4 use-pr150508 for linuxx86_64
compiled for                  64 BIT
compilation mode              UNICODE
compile time                  Jun 29 2015 23:01:53
update level                  0
patch number                  200
source id                    0.200
RKS compatibility level      0

 

After choosing the correct kernel path, navigate to the corresponding kernel release and operating system, and choose the #Database independent and the database-specific links.


I used the SAPEXE.SAR and SAPEXEDB.SAR archives for the update. In addition, dw.sar, lib_dbsl.sar and further archives can be used, but only the ones which are newer than the latest SAPEXE(DB) packages. All other executables are already part of SAPEXE(DB).

 


Download them to your SAP server.

 

Step 3: Make backup of current kernel

 

Check via AL11 transaction where DIR_CT_RUN (central kernel folder) points to:

 

DIR_CT_RUN /usr/sap/SID/SYS/exe/uc/linuxx86_64

 

This is where the kernel will be distributed to DIR_EXECUTABLE via sapcpe from.

 

Make backup of current DIR_CT_RUN:

 

hostname:sidadm 14> ls -al

total 306260

drwxr-xr-x 3 sidadm sapsys      4096 Nov 11 08:24 .

drwxr-xr-x 5 sidadm sapsys      4096 Jan 26  2015 ..

drwxr-xr-x 5 sidadm sapsys    12288 Oct  2 10:05 linuxx86_64

-rw-r--r-- 1 sidadm sapsys 298799041 Nov 11 08:17 SAPEXE_300-20012215.SAR

-rw-r--r-- 1 sidadm sapsys  14468589 Nov 11 08:17 SAPEXEDB_300-20012214.SAR

 

hostname:sidadm 15> mkdir linuxx86_64-bckp

 

hostname:sidadm 17> ls -al

total 306264

drwxr-xr-x 4 sidadm sapsys      4096 Nov 11 08:25 .

drwxr-xr-x 5 sidadm sapsys      4096 Jan 26  2015 ..

drwxr-xr-x 5 sidadm sapsys    12288 Oct  2 10:05 linuxx86_64

drwxr-xr-x 2 sidadm sapsys      4096 Nov 11 08:25 linuxx86_64-bckp

-rw-r--r-- 1 sidadm sapsys 298799041 Nov 11 08:17 SAPEXE_300-20012215.SAR

-rw-r--r-- 1 sidadm sapsys  14468589 Nov 11 08:17 SAPEXEDB_300-20012214.SAR

 

hostname:sidadm 18> cp -R linuxx86_64/* linuxx86_64-bckp

 

Step 4: Stop SAP system

 

hostname:sidadm 22> stopsap r3

Checking SID Database

Database is running

-------------------------------------------

stopping the SAP instance D02

Shutdown-Log is written to /home/sidadm/stopsap_D02.log

-------------------------------------------

/usr/sap/SID/D02/exe/sapcontrol -prot NI_HTTP -nr 02 -function Stop

Instance on host hostname stopped

Waiting for cleanup of resources

.............

stopping the SAP instance DVEBMGS00

Shutdown-Log is written to /home/sidadm/stopsap_DVEBMGS00.log

-------------------------------------------

/usr/sap/SID/DVEBMGS00/exe/sapcontrol -prot NI_HTTP -nr 00 -function Stop

Instance on host hostname stopped

Waiting for cleanup of resources

.....

stopping the SAP instance ASCS01

Shutdown-Log is written to /home/sidadm/stopsap_ASCS01.log

-------------------------------------------

/usr/sap/SID/ASCS01/exe/sapcontrol -prot NI_HTTP -nr 01 -function Stop

Instance on host hostname stopped

Waiting for cleanup of resources

..

 

Step 5: Extract kernel archives

 

First copy SAPCAR from DIR_CT_RUN to a location outside of DIR_CT_RUN. This way the extraction won't stop due to a "text file busy" error.

 

hostname:sidadm 29> cp linuxx86_64/SAPCAR SAPCAR

 

hostname:sidadm 30> ls -al

total 310552

drwxr-xr-x 4 sidadm sapsys      4096 Nov 11 08:43 .

drwxr-xr-x 5 sidadm sapsys      4096 Jan 26  2015 ..

drwxr-xr-x 5 sidadm sapsys     12288 Oct  2 10:05 linuxx86_64

drwxr-xr-x 5 sidadm sapsys     12288 Nov 11 08:26 linuxx86_64-bckp

-rwxr-xr-x 1 sidadm sapsys   4367136 Nov 11 08:43 SAPCAR

-rw-r--r-- 1 sidadm sapsys 298799041 Nov 11 08:17 SAPEXE_300-20012215.SAR

-rw-r--r-- 1 sidadm sapsys  14468589 Nov 11 08:17 SAPEXEDB_300-20012214.SAR

Then extract the SAPEXE(DB) archives into DIR_CT_RUN:

hostname:sidadm 32> ./SAPCAR -xfj SAPEXE_300-20012215.SAR -R ./linuxx86_64

SAPCAR: processing archive SAPEXE_300-20012215.SAR (version 2.01)

SAPCAR: 740 file(s) extracted

 

hostname:sidadm 33> ./SAPCAR -xfj SAPEXEDB_300-20012214.SAR -R linuxx86_64

SAPCAR: processing archive SAPEXEDB_300-20012214.SAR (version 2.01)

SAPCAR: 26 file(s) extracted


If you want to extract other archives as well check point 8. in SAP Kernel Update on Unix and Linux - Application Server Infrastructure - SCN Wiki


Step 6: Adjust permissions by root user

 

hostname:/usr/sap/SID/SYS/exe/uc # chmod -R 755 linuxx86_64

 

hostname:/usr/sap/SID/SYS/exe/uc # chown -R sidadm:sapsys linuxx86_64

 

hostname:/usr/sap/SID/SYS/exe/uc # ./linuxx86_64/saproot.sh SID

Preparing /usr/sap/SID/SYS/exe/uc/linuxx86_64/brbackup ...

Preparing /usr/sap/SID/SYS/exe/uc/linuxx86_64/brarchive ...

Preparing /usr/sap/SID/SYS/exe/uc/linuxx86_64/brconnect ...

Preparing /usr/sap/SID/SYS/exe/run/brbackup ...

Preparing /usr/sap/SID/SYS/exe/run/brarchive ...

Preparing /usr/sap/SID/SYS/exe/run/brconnect ...

Preparing icmbnd ...

icmbnd.new does not exist - skipped

Set user ID bit on /usr/sap/SID/DVEBMGS00/exe/sapuxuserchk

Set user ID bit on /usr/sap/SID/D02/exe/sapuxuserchk

Set user ID bit on /usr/sap/SID/ASCS01/exe/sapuxuserchk

done

 

Step 7: Start SAP system

 

hostname:sidadm 46> startsap r3

Checking SID Database

Database is running

-------------------------------------------

Starting Startup Agent sapstartsrv

OK

Instance Service on host hostname started

-------------------------------------------

starting SAP Instance ASCS01

Startup-Log is written to /home/sidadm/startsap_ASCS01.log

-------------------------------------------

/usr/sap/SID/ASCS01/exe/sapcontrol -prot NI_HTTP -nr 01 -function Start

Instance on host hostname started

Starting Startup Agent sapstartsrv

OK

Instance Service on host hostname started

-------------------------------------------

starting SAP Instance DVEBMGS00

Startup-Log is written to /home/sidadm/startsap_DVEBMGS00.log

-------------------------------------------

/usr/sap/SID/DVEBMGS00/exe/sapcontrol -prot NI_HTTP -nr 00 -function Start

Instance on host hostname started

Starting Startup Agent sapstartsrv

OK

Instance Service on host hostname started

-------------------------------------------

starting SAP Instance D02

Startup-Log is written to /home/sidadm/startsap_D02.log

-------------------------------------------

/usr/sap/SID/D02/exe/sapcontrol -prot NI_HTTP -nr 02 -function Start

Instance on host hostname started
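
Optionally, before checking the patch level in the GUI, you can verify at OS level that all processes of each instance came up cleanly. A minimal sketch using the standard sapcontrol web method GetProcessList (instance number 00 is just an example; repeat for 01 and 02):

/usr/sap/SID/DVEBMGS00/exe/sapcontrol -prot NI_HTTP -nr 00 -function GetProcessList

All listed processes should show dispstatus GREEN.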

 

Step 8: Check new kernel patch level

 

Login to SAP and choose menu System -> Status:

 

Kernel release    742

Sup.Pkg lvl.      300

 

DBSL Patch Level  300
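
If you prefer to double-check the kernel level at OS level as well, a minimal sketch (both are standard kernel tools; the exact output fields may differ slightly between releases):

/usr/sap/SID/DVEBMGS00/exe/sapcontrol -prot NI_HTTP -nr 00 -function GetVersionInfo

/usr/sap/SID/DVEBMGS00/exe/disp+work -v

Both should report kernel release 742 and patch level 300 after the update.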

 

Step 9: Enjoy

 

ENJOY.jpg

This system's system number is "1234567890" but there is a license key for system number "0987654321" in the license key file.


I just wanted to install a new license in my test system. I deleted the existing ones and tried to install the new one in transaction SLICENSE. The new license came with a different system number.

 

 

The problem

 

The following error message appeared:

This system's system number is "<10-digit number>" but there is a license key for system number "<another 10-digit number>" in the license key file.

 

 

The background

 

The root cause was that the "System Number" did not become empty: even after deleting the license it still showed the old value.

 

  • I checked the content of the license table using the
    saplikey pf=/sapmnt/<SID>/profile/<instance profile> -show
    command
  • I tried to delete it again using the
    saplikey pf=/sapmnt/<SID>/profile/<instance profile> -delete "*" "*" "*"
    command
  • Tried to empty the table buffer via /$TAB in the OK code field
  • Tried to empty all buffers via /$SYNC in the OK code field

but none of them was successful.

 

 

The solution

 

  • I had to restart the SAP system.
  • Afterwards the system number was empty ("System No. Empty") and I could install the new license normally.
  • A subsequent restart normalized the data displayed in SLICENSE.
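
For the record, the new license can also be installed from the command line with saplikey, the same tool used above (a minimal sketch; the license file path /tmp/license.txt is just a placeholder for wherever you saved the key file downloaded from the SAP Support Portal):

saplikey pf=/sapmnt/<SID>/profile/<instance profile> -install /tmp/license.txt

saplikey pf=/sapmnt/<SID>/profile/<instance profile> -show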

 

Hope this information will be helpful for you as well.

Work process architecture and What is a Dialog step?


Hi Guys,

 

We all might have come across architectural information on various SAP components.

The work process is one of the main components of the SAP Web AS architecture, and today I would like to share some information on work processes: their architecture and the dialog step.

 

Work Process Architecture: The work process architecture comprises 3 main components, as shown in the figure below

  1. Dialog Interpreter
  2. ABAP Processor
  3. Database Interface

Untitled.jpg


  • Task Handler: It coordinates the activities within a work process. It manages the loading and unloading of the user session context at the beginning and end of each dialog step. It also communicates with the dispatcher and activates the Dynpro interpreter or the ABAP processor as required to perform its tasks.

 

  • ABAP processor: Is in charge of executing the ABAP programs

 

  • Dialog interpreter: (Also known as the Dynpro processor) is in charge of interpreting and executing the logic of R/3 screens.

 

  • Database interface: It establishes and terminates the connection between the work process and the database. It also provides access to database tables, access to Repository objects, and transaction control (commit and rollback handling)

 

What is a Dialog Step:

Untitled.jpg


  1. The dispatcher classifies the request and places it in the appropriate request queue
  2. The request is passed in order of receipt to a free dialog work process
  3. The subprocess “taskhandler” restores the user context in a step known as “roll in”. The user context mainly contains data from the transactions currently being run by this user, as well as the user's authorizations
  4. The taskhandler calls the Dynpro processor to convert the screen data to ABAP variables
  5. The ABAP processor executes the coding of the “Process after Input” module (PAI module) from the preceding screen, along with the “Process before Output” module (PBO module) of the following screen
  6. It also communicates, if necessary, with the database
  7. The Dynpro processor then converts the ABAP variables again to screen fields. When the Dynpro Processor has finished its task, the taskhandler becomes active again
  8. The current user context is stored by the taskhandler in shared memory (roll out)
  9. Resulting data is returned through the dispatcher to the front end
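
If you want to watch this flow on a live system, one simple way is to look at the work process table from the OS, for example with the standard sapcontrol web method ABAPGetWPTable (a minimal sketch; instance number 00 is just an example):

sapcontrol -nr 00 -function ABAPGetWPTable

It lists every work process with its type (DIA, UPD, ENQ, BTC, SPO), its status and the user/report currently rolled in, which maps directly to the dialog steps described above. Inside the system, SM50/SM66 show the same information.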

 

 

That's all folks !!!!!

 

Please feel free to add/correct the information placed above. Thank you.

 

 

Regards,

Prithviraj.

List of HTTP Status Code


Hi All,

 

We all might have come across some annoying, half-explained error/status codes while processing HTTP requests, and then, with this limited information, we have to search the internet to find the problem that best fits ours.

 

While doing so, most of us don't know what exactly the error code tells us, so the search filters are very limited and the results are vast. To get a better understanding of HTTP status codes, I have listed all of them below (courtesy: https://en.wikipedia.org/wiki/List_of_HTTP_status_codes) to help us understand what exactly they want to tell us.

 

There are five major categories of HTTP status/error codes, listed below:

  • 1xx Informational
  • 2xx Success
  • 3xx Redirection
  • 4xx Client Error
  • 5xx Server Error
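
Before going through the list: if you just need to know which status code a URL returns (for example an ICF service), a quick client-side check is often enough. A minimal sketch with curl (host, port and service path are placeholders – use whatever URL you are troubleshooting):

curl -s -o /dev/null -w "%{http_code}\n" "http://<host>:<http_port>/sap/public/ping"

curl then prints only the numeric status code, which you can look up in the list below.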


1xx Informational

Request received, continuing process.

This class of status code indicates a provisional response, consisting only of the Status-Line and optional headers, and is terminated by an empty line. Since HTTP/1.0 did not define any 1xx status codes, servers must not send a 1xx response to an HTTP/1.0 client except under experimental conditions.

 

100 Continue

This means that the server has received the request headers, and that the client should proceed to send the request body (in the case of a request for which a body needs to be sent; for example, a POST request). If the request body is large, sending it to a server when a request has already been rejected based upon inappropriate headers is inefficient. To have a server check if the request could be accepted based on the request's headers alone, a client must send Expect: 100-continue as a header in its initial request and check if a 100 Continue status code is received in response before continuing (or receive 417 Expectation Failed and not continue).

101 Switching Protocols

This means the requester has asked the server to switch protocols and the server is acknowledging that it will do so.

 

102 Processing

As a WebDAV request may contain many sub-requests involving file operations, it may take a long time to complete the request. This code indicates that the server has received and is processing the request, but no response is available yet. This prevents the client from timing out and assuming the request was lost.

 

 

2xx Success

This class of status codes indicates the action requested by the client was received, understood, accepted and processed successfully.

 

200 OK

Standard response for successful HTTP requests. The actual response will depend on the request method used. In a GET request, the response will contain an entity corresponding to the requested resource. In a POST request the response will contain an entity describing or containing the result of the action.


201 Created

The request has been fulfilled and resulted in a new resource being created.


202 Accepted

The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place.

 

203 Non-Authoritative Information (since HTTP/1.1)

The server successfully processed the request, but is returning information that may be from another source.

 

204 No Content

The server successfully processed the request, but is not returning any content. Usually used as a response to a successful delete request.

205 Reset Content

The server successfully processed the request, but is not returning any content. Unlike a 204 response, this response requires that the requester reset the document view.

 

206 Partial Content

The server is delivering only part of the resource (byte serving) due to a range header sent by the client. The range header is used by tools like wget to enable resuming of interrupted downloads, or split a download into multiple simultaneous streams.

 

207 Multi-Status

The message body that follows is an XML message and can contain a number of separate response codes, depending on how many sub-requests were made.

 

208 Already Reported

The members of a DAV binding have already been enumerated in a previous reply to this request, and are not being included again.

 

226 IM Used

The server has fulfilled a request for the resource, and the response is a representation of the result of one or more instance-manipulations applied to the current instance.

 

 

3xx Redirection

This class of status code indicates the client must take additional action to complete the request. Many of these status codes are used in URL redirection.

A user agent may carry out the additional action with no user interaction only if the method used in the second request is GET or HEAD. A user agent should not automatically redirect a request more than five times, since such re-directions usually indicate an infinite loop.

 

300 Multiple Choices

Indicates multiple options for the resource that the client may follow. It, for instance, could be used to present different format options for video, list files with different extensions, or word sense disambiguation.

 

301 Moved Permanently

This and all future requests should be directed to the given URL.

 

302 Found

This is an example of industry practice contradicting the standard. The HTTP/1.0 specification (RFC 1945) required the client to perform a temporary redirect (the original describing phrase was "Moved temporarily"), but popular browsers implemented 302 with the functionality of a 303 See Other. Therefore, HTTP/1.1 added status codes 303 and 307 to distinguish between the two behaviors. However, some Web applications and frameworks use the 302 status code as if it were the 303.

 

303 See Other (since HTTP/1.1)

The response to the request can be found under another URL using a GET method. When received in response to a POST (or PUT/DELETE), it should be assumed that the server has received the data and the redirect should be issued with a separate GET message.

 

304 Not Modified

Indicates that the resource has not been modified since the version specified by the request headers If-Modified-Since or If-None-Match. This means that there is no need to re-transmit the resource, since the client still has a previously-downloaded copy.

 

305 Use Proxy (since HTTP/1.1)

The requested resource is only available through a proxy, whose address is provided in the response. Many HTTP clients (such as Mozilla and Internet Explorer) do not correctly handle responses with this status code, primarily for security reasons.

 

306 Switch Proxy

No longer used. Originally meant "Subsequent requests should use the specified proxy."

 

307 Temporary Redirect (since HTTP/1.1)

In this case, the request should be repeated with another URI; however, future requests should still use the original URI. In contrast to how 302 was historically implemented, the request method is not allowed to be changed when reissuing the original request. For instance, a POST request should be repeated using another POST request.

 

308 Permanent Redirect

The request, and all future requests should be repeated using another URI. 307 and 308 (as proposed) parallel the behaviours of 302 and 301, but do not allow the HTTP method to change. So, for example, submitting a form to a permanently redirected resource may continue smoothly.

 

 

4xx Client Error

The 4xx class of status code is intended for cases in which the client seems to have errored. Except when responding to a HEAD request, the server should include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents should display any included entity to the user.

 

400 Bad Request

The server cannot or will not process the request due to something that is perceived to be a client error.

 

401 Unauthorized

Similar to 403 Forbidden, but specifically for use when authentication is required and has failed or has not yet been provided. The response must include a WWW-Authenticate header field containing a challenge applicable to the requested resource. See Basic access authentication and Digest access authentication.

 

402 Payment Required

Reserved for future use. The original intention was that this code might be used as part of some form of digital cash or micropayment scheme, but that has not happened, and this code is not usually used. YouTube uses this status if a particular IP address has made excessive requests, and requires the person to enter a CAPTCHA.

 

403 Forbidden

The request was a valid request, but the server is refusing to respond to it. Unlike a 401 Unauthorized response, authenticating will make no difference.

 

404 Not Found

The requested resource could not be found but may be available again in the future. Subsequent requests by the client are permissible.

 

405 Method Not Allowed

A request was made of a resource using a request method not supported by that resource; for example, using GET on a form which requires data to be presented via POST, or using PUT on a read-only resource.

 

406 Not Acceptable

The requested resource is only capable of generating content not acceptable according to the Accept headers sent in the request.

 

407 Proxy Authentication Required

The client must first authenticate itself with the proxy.

 

408 Request Timeout

The server timed out waiting for the request. According to HTTP specifications: "The client did not produce a request within the time that the server was prepared to wait. The client MAY repeat the request without modifications at any later time."

 

409 Conflict

Indicates that the request could not be processed because of conflict in the request, such as an edit conflict in the case of multiple updates.

 

410 Gone

Indicates that the resource requested is no longer available and will not be available again. This should be used when a resource has been intentionally removed and the resource should be purged. Upon receiving a 410 status code, the client should not request the resource again in the future. Clients such as search engines should remove the resource from their indices. Most use cases do not require clients and search engines to purge the resource, and a "404 Not Found" may be used instead.

 

411 Length Required

The request did not specify the length of its content, which is required by the requested resource.

 

412 Precondition Failed

The server does not meet one of the preconditions that the requester put on the request.

 

413 Request Entity Too Large

The request is larger than the server is willing or able to process.

 

414 Request-URL Too Long

The URL provided was too long for the server to process. Often the result of too much data being encoded as a query-string of a GET request, in which case it should be converted to a POST request.

 

415 Unsupported Media Type

The request entity has a media type which the server or resource does not support. For example, the client uploads an image as image/svg+xml, but the server requires that images use a different format.

 

416 Requested Range Not Satisfiable

The client has asked for a portion of the file (byte serving), but the server cannot supply that portion. For example, if the client asked for a part of the file that lies beyond the end of the file.

 

417 Expectation Failed

The server cannot meet the requirements of the Expect request-header field.

 

418 I'm a teapot (RFC 2324)

This code was defined in 1998 as one of the traditional IETF April Fools' jokes, in RFC 2324, Hyper Text Coffee Pot Control Protocol, and is not expected to be implemented by actual HTTP servers.

 

419 Authentication Timeout

Not a part of the HTTP standard, 419 Authentication Timeout denotes that previously valid authentication has expired. It is used as an alternative to 401 Unauthorized in order to differentiate from otherwise authenticated clients being denied access to specific server resources

 

420 Method Failure (Spring Framework)

Not part of the HTTP standard, but defined by spring in the HttpStatus class to be used when a method failed. This status code is deprecated by spring.

 

420 Enhance Your Calm (Twitter)

Not part of the HTTP standard, but returned by version 1 of the Twitter Search and Trends API when the client is being rate limited. Other services may wish to implement the 429 Too Many Requests response code instead.

 

422 Unprocessable Entity

The request was well-formed but was unable to be followed due to semantic errors.

 

423 Locked

The resource that is being accessed is locked.

 

424 Failed Dependency (WebDAV; RFC 4918)

The request failed due to failure of a previous request (e.g., a PROPPATCH).

 

426 Upgrade Required

The client should switch to a different protocol such as TLS/1.0.

 

428 Precondition Required

The origin server requires the request to be conditional. Intended to prevent "the 'lost update' problem, where a client GETs a resource's state, modifies it, and PUTs it back to the server, when meanwhile a third party has modified the state on the server, leading to a conflict."

 

429 Too Many Requests

The user has sent too many requests in a given amount of time. Intended for use with rate limiting schemes.

 

431 Request Header Fields Too Large

The server is unwilling to process the request because either an individual header field or all the header fields collectively, are too large.

 

440 Login Timeout (Microsoft)

A Microsoft extension. Indicates that your session has expired.

 

444 No Response (Nginx)

Used in Nginx logs to indicate that the server has returned no information to the client and closed the connection (useful as a deterrent for malware).

 

449 Retry With (Microsoft)

A Microsoft extension. The request should be retried after performing the appropriate action.

Often search-engines or custom applications will ignore required parameters. Where no default action is appropriate, the Aviongoo website sends a "HTTP/1.1 449 Retry with valid parameters: param1, param2 . . .” response. The applications may choose to learn, or not.

 

450 Blocked by Windows Parental Controls (Microsoft)

A Microsoft extension. This error is given when Windows Parental Controls are turned on and are blocking access to the given webpage.

451 Unavailable For Legal Reasons (Internet draft)

Defined in the internet draft "A New HTTP Status Code for Legally-restricted Resources". Intended to be used when resource access is denied for legal reasons, e.g. censorship or government-mandated blocked access. A reference to the 1953 dystopian novel Fahrenheit 451, where books are outlawed.

 

451 Redirect (Microsoft)

Used in Exchange ActiveSync if there either is a more efficient server to use or the server cannot access the users' mailbox.

The client is supposed to re-run the HTTP Autodiscovery protocol to find a better suited server

 

494 Request Header Too Large (Nginx)

Nginx internal code similar to 431 but it was introduced earlier in version 0.9.4 (on January 21, 2011)

 

495 Cert Error (Nginx)

Nginx internal code used when SSL client certificate error occurred to distinguish it from 4XX in a log and an error page redirection.

 

496 No Cert (Nginx)

Nginx internal code used when client didn't provide certificate to distinguish it from 4XX in a log and an error page redirection.

 

497 HTTP to HTTPS (Nginx)

Nginx internal code used for the plain HTTP requests that are sent to HTTPS port to distinguish it from 4XX in a log and an error page redirection.

 

498 Token expired/invalid (Esri)

Returned by ArcGIS for Server. A code of 498 indicates an expired or otherwise invalid token.

 

499 Client Closed Request (Nginx)

Used in Nginx logs to indicate when the connection has been closed by client while the server is still processing its request, making server unable to send a status code back.

 

499 Token required (Esri)

Returned by ArcGIS for Server. A code of 499 indicates that a token is required (if no token was submitted).

 

 

5xx Server Error

The server failed to fulfill an apparently valid request.

Response status codes beginning with the digit "5" indicate cases in which the server is aware that it has encountered an error or is otherwise incapable of performing the request. Except when responding to a HEAD request, the server should include an entity containing an explanation of the error situation, and indicate whether it is a temporary or permanent condition. Likewise, user agents should display any included entity to the user. These response codes are applicable to any request method.

 

500 Internal Server Error

A generic error message, given when an unexpected condition was encountered and no more specific message is suitable.

 

501 Not Implemented

The server either does not recognize the request method, or it lacks the ability to fulfill the request. Usually this implies future availability (e.g., a new feature of a web-service API).

 

502 Bad Gateway

The server was acting as a gateway or proxy and received an invalid response from the upstream server.

 

503 Service Unavailable

The server is currently unavailable (because it is overloaded or down for maintenance). Generally, this is a temporary state.

 

504 Gateway Timeout

The server was acting as a gateway or proxy and did not receive a timely response from the upstream server.

 

505 HTTP Version Not Supported

The server does not support the HTTP protocol version used in the request.

 

506 Variant Also Negotiates

Transparent content negotiation for the request results in a circular reference.

 

507 Insufficient Storage

The server is unable to store the representation needed to complete the request.

508 Loop Detected (WebDAV; RFC 5842)

The server detected an infinite loop while processing the request (sent in lieu of 208 Already Reported).

 

509 Bandwidth Limit Exceeded (Apache bw/limited extension)

This status code is not specified in any RFCs. Its use is unknown.

 

510 Not Extended

Further extensions to the request are required for the server to fulfil it.

 

511 Network Authentication Required

The client needs to authenticate to gain network access. Intended for use by intercepting proxies used to control access to the network (e.g., "captive portals" used to require agreement to Terms of Service before granting full Internet access via a Wi-Fi hotspot).

 

520 Origin Error (CloudFlare)

This status code is not specified in any RFCs, but is used by CloudFlare's reverse proxies to signal an "unknown connection issue between CloudFlare and the origin web server" to a client in front of the proxy.

 

521 Web server is down (CloudFlare)

This status code is not specified in any RFCs, but is used by CloudFlare’s reverse proxies to indicate that the origin webserver refused the connection.

 

522 Connection timed out (CloudFlare)

This status code is not specified in any RFCs, but is used by CloudFlare’s reverse proxies to signal that a server connection timed out.

 

523 Proxy Declined Request (CloudFlare)

This status code is not specified in any RFCs, but is used by CloudFlare’ s reverse proxies to signal a resource that has been blocked by the administrator of the website or proxy itself.

 

524 A timeout occurred (CloudFlare)

This status code is not specified in any RFCs, but is used by CloudFlare's reverse proxies to signal a network read timeout behind the proxy to a client in front of the proxy.

 

598 Network read timeout error (Unknown)

This status code is not specified in any RFCs, but is used by Microsoft HTTP proxies to signal a network read timeout behind the proxy to a client in front of the proxy.

 

599 Network connect timeout error (Unknown)

This status code is not specified in any RFCs, but is used by Microsoft HTTP proxies to signal a network connect timeout behind the proxy to a client in front of the proxy.

 

Hope this benefits you all.

 

 

Regards,

Prithviraj.


Output Controller - SAP Spool auto delete


Dear All

 

Hope all SAP Basis consultants and users are doing well. I may still be a beginner in the SAP Basis world, but I think it is very good to share our knowledge with each other.

 

Note: if you get bored of reading long articles, kindly forgive me.

 

One of the big issues I face daily in our SAP system: the spool is full.

 

In our SAP system we have a service user for a third-party mobility sales application using handheld terminals. This user allows the third party to do some tasks in SAP (create sales orders, print sales orders, create invoices, cash journal tasks, ...). Usually printing is done through special printers connected to the handhelds, but the sales orders are also printed virtually in our SAP system with output status "-", which means "Not yet sent to the host system (no output request exists)".

 

From my careful reading to solve this issue, I found three solutions: the first is a temporary solution, the second is not a bad one, and the last is a wonderful one.

 

First solution:

Delete old spool requests manually.

Tcode SP01 - enter an old period, then execute.

11.JPG

 

Then select all spool request numbers and delete them.

12.JPG

 

As you see, it is not a perfect solution.

 


Second solution:

Maximize the spool number range.


Use Tcode SNRO, object SPO_NUM, then use the NUMBER RANGE option.

13.JPG

 

Press the INTERVALS change option.
14.JPG


You can increase the range of the intervals.

15.JPG

As you see, it is not the best solution either, because you may forget the maximum range number and in the future you may be surprised that the spool is full again.

 

Third solution:

Briefly, it relies on deleting old spool requests automatically: I created a periodic job that executes an SAP standard program with my own variant.

 

Use Tcode SA38 to execute program RSPO1041.

 

You can customize/control the program (user, server, client, spool request state, days, ...), then save your variant.

16.JPG

You can create a background job with your estimated execution time to run program RSPO1041 with your own saved variant.

17.JPG

 

............................................

 

RZ20 may be useful for you to check the percentage of used spool requests.

 

19.JPG

18.JPG

 

 

 

Kindly forgive me for any bad English typing or meaning.

 

Finally, I hope this was helpful.

 

I appreciate all your time.

ASCS ENQ performance tuning


Hello SAP Administrators,

 

the ASCS is not really a new topic, but I guess most of you never changed ASCS parameters after the installation/migration of it. In the most cases the defaults are good enough, but for big system environments you have to optimize them.

In a good sizing/parametrization you should take care also of the ASCS parameters!

 

Basics

ASCS

- Messages Server

- Enqueue Server

 

Every new system will be installed with ASCS and the option for ERS. ERS is the Enqueue Replication Service which take place in a cluster scenario.

The future is the standalone enqueue server which is just another name for the ENQ service inside the ASCS.

There are a lot of documents and notes regarding the enqueue server and I will just collect them here with some hidden parameters and also show you my tests.

In the past integrated ENQ Server can be administrated in the CI ABAP profile via RZ10 or directly in the filesystem.

The new ASCS can only be configured via the profile on the filesystem. It is not visible anymore via RZ11/RZ10.

Don't believe what you see in RZ11! You just see the defaults or any old value which were note deleted from the profile.

It is essential that the binaries from (A)SCS and ERS instance are from the same binary set.

Please delete all old enqueue parameters from default and instance profiles!

 

Test environment:

Kernel Release 742 SP210

 

PAS with ASCS (NUC)

AIX 7.1

DB2 10.5 FP3/4

120WP

 

application server

20x linux vmware

150WP per server

=>3000WP

 

Calculate some parameters

 

1) calculation of enque/server/max_requests

workprocesses + enqueue table size in MB * 20|25 (NUC|UC)

=> 3120 + 1024 * 20 (because we have a NUC system)

=> 23600

 

2) calculation of the snapshot area

- the snapshot memory area must be greater than the enqueue table size

- The size of this memory area is the multiplication of parameters enque/snapshot_pck_size and enque/snapshot_pck_ids

- There is no restriction or recommendation about the better number of packages and its size because this depends on the business process

- 1903553 - Standalone Enqueue Server (ENSA) and snapshot packages

 

ENQ table size: 1024MB

snapshot memory=enque/snapshot_pck_ids*enque/snapshot_pck_size

=> default size = 10.000 * 50.000 = 500MB

=> just edit the parameters if your ENQ table size is above 500MB

=> the package size is :

1.000.000*80.000

80.000.000 KB

80 GB

 

=> so in this case we can definitely reduce the parameter, but our tests have shown that this settings are working pretty good

 

 

 

3) Client profile (application Server):

enque/process_location = REMOTESA

enque/serverhost = <hostname of the enqueue-server>

enque/serverinst = <instance number of the enqueue-server>

enque/deque_wait_answer = TRUE

 

 

4) ASCS instance profile:

#If you set this parameter to 0, the lock table is not placed in any pool

ipc/shm_psize_34 = 0

 

#default 50KB => parameter value 50.000 (30.000 and 100.000 possible values)- determines the size of

#the individual packets. A lock entry requires about 1 KB.

enque/snapshot_pck_size = 80000

 

#enqueue table size

enque/table_size = 1024000

 

#default: 10.000 (10-1.000.000 possible) => newer releases default: 1600 - determines the

#maximum number of snapshot packets

enque/snapshot_pck_ids = 1000000

 

# hidden parameter, see note 1850053 - ENSA suspended the ERS network connection

enque/ni_queue_size = 1000

 

#default: 1000 - parameter determines the number of processes that can be connected to the

#enqueue server. Set the parameter to the same value as the total number of work processes in the system

enque/server/max_clients = 5000

 

#new name for 'max_query_requests' is 'enque/server/query_block_count' since release 800

enque/server/max_query_requests = 5000

 

# default 1000 - maximum number of enqueue requests that can be processed simultaneously

enque/server/max_requests = 23600

 

#Max. number of subsequent asynchronous requests - a synchronous request is forced

#every n asynchronous request. You specify this number n in this parameter.

enque/async_req_max = 5000

 

#The number of I/O threads - a value higher than 4 has never resulted in an increase in throughput.

enque/server/threadcount = 8

 

#parameter specifies how many memory blocks (each has 32 KB) are reserved in the

#replication server for transferring the data

enque/server/query_block_count = 5000

 

#ENQ server name

rdisp/enqname = $(rdisp/myname)

 

#mechanism for communication between the threads - with value true, the communication

#is quicker but generates a heavy load on the system

enque/server/use_spinning=false

Undocumented parameters which have to be tested on your own:

enque/server/req_block_size = 13333

enque/enrep/req_block_count = 14000

 

We have tested a lot with the threads and the snapshot size. The settings which you see above are the final setup. In the past we had a lot of ENQ time (up to 30%) in our mass processing batch. We could reduce this to about 1-2%.

This reduced the overall time for the massively parallel processes by about 40-60%!!! Nobody expected such a big benefit for these processes. But it depends on your application and the current ENQ time whether the improvements can also take place in your environment.

 

Please analyze your ASCS and application server profiles for these parameters. If you see a lot of ENQ time while analyzing with TX STAD or SE30, you should check the performance of your ENQ server.

This can be done with SM12 (OK code: test/dudel). Please use the following note on how to do this: 1320810 - Z_ENQUEUE_PERF
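
Besides SM12, you can also pull basic enqueue statistics directly from the ASCS at OS level. A minimal sketch (availability and request numbers can differ per kernel release; instance number 01 and the profile path are just examples – check the built-in help of the tools if in doubt):

sapcontrol -nr 01 -function EnqGetStatistic

ensmon pf=/usr/sap/<SID>/SYS/profile/<SID>_ASCS01_<hostname> 2

The first call returns lock table usage and request statistics, the second one (ensmon, request 2) shows the replication state towards the ERS.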

 

Here is an example of a small test system without application servers (I will add some screenshots from the big environment in a few weeks):

=> you can see that most of the requests are <= 1ms

=> more interesting is an environment with more application servers because of the RTT (network round trip times)

 

Another indicator could be the report SAPLSENA. If you see high times for this report in TX ST03, you should take care of it. Also, when you see the report running for a long time in SM66/SM50, this is an indicator of bad ENQ performance.

 

 

A good starting point for your analysis could be this blog on sap-perf.ca

 

and then => happy tuning

 

Details:

920979 - Out of memory in the standalone enqueue server

1850053 - ENSA suspended the ERS network connection

1903553 - Standalone Enqueue Server (ENSA) and snapshot packages

654744 - Several errors corrected in the standalone enqueue server

sap.help ENQ Server

 

If you have any further questions, don't hesitate to comment on the blog or contact me or one of my colleagues at Q-Partners (info@qpcm.de).

 

Best Regards,

Jens Gleichmann

 

Technology Consultant at Q-Partners (www.qpcm.eu )

 

Edit History:

#V1.1 Added example of 1320810 - Z_ENQUEUE_PERF

SAP WEBGUI rendering issue with Internet Explorer


There are situations where you cannot change the browser (Internet Explorer) due to other dependent custom-developed applications and compatibility issues. You want to use SAP WEBGUI with a lower version of IE and SAP_BASIS 700, and you face a WEBGUI rendering issue while activating it.


Issue :

- SAP WEBGUI rendering issue in ECC6 with a lower version of Internet Explorer (IE8).

SAP WEB GUI.jpg

 

- Portal ESS links hang on the splash screen "SAP GUI for HTML" where the backend system is ECC6 and the browser is a lower version of IE.

 

SAP ESS MSS.jpg

 

 

Analysis :


System details - ECC6 with SAP_BASIS 700 and kernel 721 EXT; Portal NW 7.3 with an SAP_BASIS 700 system as backend.


While testing WEBGUI and portal ESS in IE11, everything works fine, but a lower version of IE (e.g. IE8, IE9) displays the splash screen "SAP GUI for HTML" and keeps rendering.

Make sure you have the settings below in your ECC/backend system (a quick reachability check with curl follows the list):

  • ICM parameters (icm/server_port_0 and icm/host_name_full ) are set and active, check in SMICM.
  • All required services (/default_host/sap/bc/gui/sap/its/webgui ; /default_host/sap/public/bc/ur ;  /default_host/sap/public/bc/its/mimes) are active in SICF.
  • All required services are published through SE80 or tcode - SIAC_PUBLISH_ALL_INTERNAL.
  • Make sure you have latest supported version of SAP kernel (disp+work software) as per PAM.
  • Compatibility mode of IE is activated for all sites.
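
A quick way to verify that the webgui service is reachable and active at HTTP level at all is a simple curl call against the service path from the list above (a minimal sketch; host and port are placeholders, the HTTP port is usually the one set in icm/server_port_0 – an active service typically answers with 200 or asks for authentication with 401):

curl -s -o /dev/null -w "%{http_code}\n" "http://<host>:<http_port>/sap/bc/gui/sap/its/webgui"

If this already fails at HTTP level, the problem is in the ICM/SICF configuration rather than in the browser.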

 

 

 

Solution :


Go to tcode : SICF

Browse service :  /default_host/sap/bc/gui/sap/its/webgui

Click on Display<-> Change mode.

In Service Data tab, click on GUI Configuration

SAP SICF Webgui .jpg

 

Add a new parameter as below:

Parameter Name : ~WEBGUI_NEW_DESIGN  and Value : 1

SAP SICF parameter.jpg

 

Test SAP WEBGUI or ESS now; it should work fine in IE8.

 

 

For more information, refer to the SAP notes below:

1637287 - DCK: New design for SAP GUI for HTML in SAP_BASIS 700/701

1651937 - Integrated ITS 7.02: WEBGUI is not loading ("Please wait" message)

New Change Analysis (ABAP) Tool with ST-A/PI Release 01S*


Hello Community,

this is my first blog on SCN and I hope you will enjoy it.

 

I am going to present a new Tool - Change Analysis (ABAP) - to identify what Changes have been made in an SAP ABAP System.

 

Changes can be moved or made in totally different ways and affect different areas in an ABAP System.

 

With ST-A/PI release 01S*, SAP introduces a new toolset - Backoffice Tools - in ST13, which includes the Change Analysis (ABAP). With the new tool you can identify different kinds of changes implemented in an SAP system – NetWeaver based, ABAP – during a given timeframe:

- Starttime & -date of Active Hosts

- OS/DB/SAP Parameter Changes

- Transport Requests - Imported from other System

- Transport Requests - Created in current System

- Change time & -date of Specific Program/Function Group/Class

 

The tool can be accessed directly in the ABAP System via transaction ST13 –> BACKOFFICE_TOOLS –> Change Analysis (ABAP)

 

Change Analysis (ABAP)_Selection Screen.png

 

We have also created a Knowledge Base Article, 2223746 - Backoffice Tools – Change Analysis (ABAP) in ST13. All information on how to use the tool is documented in this KBA. Besides, we published an article as a short “How To Guide” on SCN under the following link: http://scn.sap.com/docs/DOC-69342

 

Important note/constraint for the execution of the Change Analysis (ABAP) tool: depending on the selected timeframe, the required execution time of the tool can increase significantly. Please choose proper selection criteria to restrict the result set to a minimum – normally < 7 days should be sufficient. At maximum, you can select a timeframe of 30 days.

 

Feel free to try it out and let me know your feedback.


Best regards,

Julia

