ROME

Paper: Locating and Editing Factual Associations in GPT

execute_rome() -> Dict[str, Tuple[torch.Tensor]]

Execution function: runs the ROME update algorithm for the specified edit at the specified layer

  • Parameters

    • model(PreTrainedModel): model to be edited

    • tok(PreTrainedTokenizer): tokenizer for inputs

    • requests(List[Dict]): The edit descriptors and targets.

    • hparams(Hyperparams): hyperparameters for editing method

  • Return Type

    • delta(Dict[str, Tuple[torch.Tensor]]): new delta weights
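
A minimal sketch of how such a delta could be applied, assuming (as in the original ROME codebase) that each entry maps a weight name to a (left vector, right vector) pair whose outer product forms a rank-one update; the weight name and dimensions below are illustrative assumptions:

```python
import torch

# Assumed delta format: weight name -> (left_vector, right_vector).
# The layer name and sizes are hypothetical, for illustration only.
delta = {
    "transformer.h.5.mlp.c_proj.weight": (torch.randn(16), torch.randn(64)),
}

# Stand-in for the model's current weights.
weights = {"transformer.h.5.mlp.c_proj.weight": torch.zeros(16, 64)}

# Apply each rank-one update: W <- W + left x right^T.
for name, (left, right) in delta.items():
    weights[name] += torch.outer(left, right)
```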

apply_rome_to_model() -> PreTrainedModel

Main function: given the requests, applies ROME to the model and returns the edited model.

  • Parameters

    • model(PreTrainedModel): model to be edited

    • tok(PreTrainedTokenizer): tokenizer for inputs

    • requests(List[Dict]): The edit descriptors and targets.

    • hparams(Hyperparams): hyperparameters for editing method

    • copy(bool): whether to copy original model

    • return_orig_weights(bool): whether to return the weights of original model

    • keep_original_weight(bool): whether to edit sequentially

      • False: edit sequentially (the original weights are not restored after each edit)

      • True: do not edit sequentially (the original weights are restored after each edit)

  • Return Type

    • edited_model(PreTrainedModel): the model after editing

Example
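
A hedged usage sketch: the request keys below (`prompt`, `subject`, `target_new`) follow the format used by the original ROME repository and are assumptions here, as are the edit contents. The call itself is shown commented out because it requires a loaded model, tokenizer, and hyperparameters:

```python
# Hypothetical edit request: rewrite a factual association in the model.
requests = [{
    "prompt": "{} plays the sport of",  # "{}" is filled with the subject
    "subject": "LeBron James",
    "target_new": {"str": "football"},
}]

# With model, tok, and hparams loaded, the edit would be applied as:
# edited_model = apply_rome_to_model(
#     model, tok, requests, hparams,
#     copy=False, return_orig_weights=False, keep_original_weight=True,
# )
```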
